id (int64, 39 to 79M)
url (string, lengths 32 to 168)
text (string, lengths 7 to 145k)
source (string, lengths 2 to 105)
categories (list, lengths 1 to 6)
token_count (int64, 3 to 32.2k)
subcategories (list, lengths 0 to 27)
46,822,555
https://en.wikipedia.org/wiki/Neutral%20body%20posture
The neutral body posture (NBP) is the posture the human body naturally assumes in microgravity. Adopting any other posture while floating requires muscular effort. In the 1980s, NASA developed the Man-System Integration Standards (MSIS), a set of guidelines based on anthropometry and biomechanics, which included a definition of an average typical NBP created from measurements of crew members in the microgravity environment onboard Skylab. Still photographs of crew members taken on Skylab showed that in microgravity the body assumed a distinguishable posture, with the arms raised, the shoulders abducted, the knees flexed with noticeable hip flexion, and the feet plantar flexed. Later work by NASA based on research aboard Space Shuttle mission STS-57 found greater individual variation between crew members' neutral body positions than the earlier Skylab study had suggested. In general, three main postures were exhibited by the crew as a whole: (1) an almost standing posture, (2) a slightly pitched-forward posture with an extreme bend at the knees, and (3) an elongated posture with a straight neck. Differences in posture exhibited in this study could result from the participants' athletic bearing, the type and amount of exercise they regularly performed, or both. Other differences may also stem from past physical injuries such as bone breaks and knee or shoulder injuries, and from gender differences such as the location of the center of gravity. No single crew member exhibited the typical NBP called out in the MSIS standard. The neutral body posture occurs during a state of weightlessness and minimizes the body's need to support itself against the pull of gravity. This offloads musculoskeletal stress and reduces pressure on the diaphragm and spine. Neutral body posture supports the natural curvature of the spine. A neutral spine that is not experiencing mechanical stress will curve inward at the neck (cervical region), outward at the upper back (thoracic region), and inward at the lower back (lumbar region). NASA standards for the neutral body position have informed seat design for commercial vehicle manufacturers. In 2005, engineers and scientists at Nissan Motor Company used NBP research in the development of driver's seats for their new vehicles. See also Fetal position References Anthropometry Biomechanics Human positions Posture
Neutral body posture
[ "Physics", "Biology" ]
482
[ "Biomechanics", "Behavior", "Human positions", "Mechanics", "Human behavior" ]
46,827,481
https://en.wikipedia.org/wiki/Marine%20Well%20Containment%20Company
Marine Well Containment Company (MWCC) provides well containment equipment and technology in the deepwater U.S. Gulf of Mexico for use after blowouts. It is based in Houston, Texas. MWCC members are major companies in the petroleum industry that drill wells in the Gulf of Mexico, including BP, Shell Oil, ExxonMobil, Chevron Corporation, and ConocoPhillips. In 2010, in response to the Deepwater Horizon oil spill, ExxonMobil, Chevron Corporation, ConocoPhillips, and Shell Oil committed to providing a new containment response capability. These founding companies of MWCC recognized the need to be better prepared in the event of a deepwater well control incident. MWCC introduced the Interim Containment System in February 2011, improving previous response capabilities and helping the industry get back to work in the U.S. Gulf of Mexico. In January 2015, MWCC accepted its Expanded Containment System, forming the MWCC Containment System, which builds upon the previous system's capabilities. The Containment System is available for use in the deepwater U.S. Gulf of Mexico in depths from 500 feet to 10,000 feet and at pressures up to 20,000 psi. The system can cap, or cap and flow, an incident well and has the capacity to process up to 100,000 barrels of liquid per day and up to 200 million cubic feet of gas per day. Additionally, it is able to store up to 700,000 barrels of liquid in each of its two Modular Capture Vessels. The liquid is then brought onshore for further processing via shuttle tankers. MWCC maintains components of the Containment System in a ready state at two shore base locations along the Gulf Coast of the United States and conducts drills and training sessions on a regular basis to ensure personnel and equipment are ready to respond. TechnipFMC manages and operates the subsea containment system at the SURF shore base in Theodore, Alabama. Kiewit Corporation (KOS) operates the MCV shore base at Ingleside, Texas. The Containment System is designed to be flexible, adaptable, and ready to be mobilized upon incident notification. References 2010 establishments in Texas BP Companies based in Houston Deepwater Horizon oil spill Petroleum industry
Marine Well Containment Company
[ "Chemistry" ]
444
[ "Chemical process engineering", "Petroleum", "Petroleum industry" ]
46,829,735
https://en.wikipedia.org/wiki/Eluxadoline
Eluxadoline, sold under the brand names Viberzi and Truberzi, is a medication taken by mouth for the treatment of diarrhea and abdominal pain in individuals with diarrhea-predominant irritable bowel syndrome (IBS-D). It was approved for use in the United States in 2015. The drug originated from Janssen Pharmaceutica and was developed by Actavis. Contraindications This drug is contraindicated in people who have: Blockage of the gallbladder or a sphincter of Oddi problem Problems with excessive alcohol use Pancreatitis Liver problems Chronic or severe constipation Adverse effects Common adverse effects are constipation and nausea, but rates of discontinuation due to constipation were low for both eluxadoline and placebo. Rare adverse effects include fatigue, bronchitis, and viral gastroenteritis. Rare serious adverse effects include pancreatitis, with a general incidence of 0.3%: the incidence was higher with the 100 mg dose (0.3%) than with the 75 mg dose (0.2%). The risk is even greater in those who do not have a gallbladder, and the medication is not recommended in this group. In March 2017, the U.S. Food and Drug Administration issued a safety alert for eluxadoline concerning an increased risk of serious pancreatitis in patients without a gallbladder. An FDA review found that in such patients, spasm of the sphincter of Oddi may lead to severe pancreatitis. The FDA reported that in some cases symptoms have occurred with just one or two doses at the recommended dosage for patients without a gallbladder (75 mg). Of two deaths associated with eluxadoline reported up to February 2017, both occurred in patients without a gallbladder. Interactions Elevated concentrations of eluxadoline were observed with co-administration of inhibitors of the transporter protein OATP1B1, such as ciclosporin, gemfibrozil, certain antiretrovirals, rifampicin, and eltrombopag. Concurrent use of other drugs that cause constipation, such as opioids, alosetron, anticholinergics, and bismuth subsalicylate, is not recommended. Eluxadoline increases the concentrations of drugs which are OATP1B1 and BCRP substrates. Co-administration of eluxadoline with rosuvastatin may increase the risk of rhabdomyolysis. Pharmacology Mechanism of action Eluxadoline is a μ- and κ-opioid receptor agonist and δ-opioid receptor antagonist that acts locally in the enteric nervous system, possibly decreasing adverse effects on the central nervous system. Pharmacokinetics In in vitro studies, eluxadoline was found to be transported by OAT3 (SLC22A8), OATP1B1 (SLCO1B1), and BSEP (ABCB11) at the highest concentrations tested (400 ng/ml, which is 162-fold larger than the observed Cmax of the highest therapeutic dose of 100 mg). However, it was not found to be transported by OCT1 (SLC22A1), OAT1 (organic anion transporter 1), OCT2, OATP1B3 (SLCO1B3), P-gp (P-glycoprotein), or BCRP (ABCG2). Multidrug resistance-associated protein 2 (MRP2)-vesicular accumulation of eluxadoline was observed, indicating that the drug is a substrate of MRP2. Eluxadoline was not found to inhibit BCRP-, BSEP-, MRP2-, OCT1-, OCT2-, OAT1-, OAT3-, or OATP1B3-mediated transport of probe substrates, but it inhibited the transport of probe substrates of OATP1B1 and P-gp. Taken together, these in vitro studies suggest that eluxadoline is an in vivo substrate of OATP1B1, OAT3, and MRP2. Finally, no inhibition or induction of cytochrome P450 enzymes was observed. Following a 100 mg dose of eluxadoline, the Cmax was about 2 to 4 ng/ml and the AUC was 12 to 22 ng·h/ml. 
Eluxadoline has linear pharmacokinetics, with no accumulation upon repeated twice-daily dosing. Taking eluxadoline with a high-fat meal decreased the Cmax by 50% and the AUC by 60%. Chemistry Synthesis The synthesis of eluxadoline was published in 2006. References Amines Antidiarrhoeals Carbamates Delta-opioid receptor antagonists Diarrhea Imidazoles Drugs developed by AbbVie Janssen Pharmaceutica Kappa-opioid receptor agonists Mu-opioid receptor agonists Peripherally selective drugs
Eluxadoline
[ "Chemistry" ]
1,062
[ "Amines", "Bases (chemistry)", "Functional groups" ]
46,830,895
https://en.wikipedia.org/wiki/Predicted%20no-effect%20concentration
The predicted no-effect concentration (PNEC) is the concentration of a chemical below which no adverse effects of exposure in an ecosystem are expected to be measured. PNEC values are intended to be conservative and predict the concentration at which a chemical will likely have no toxic effect. They are not intended to predict the upper limit of concentration of a chemical that has a toxic effect. PNEC values are often used in environmental risk assessment as a tool in ecotoxicology. A PNEC for a chemical can be calculated with acute toxicity or chronic toxicity single-species data, Species Sensitivity Distribution (SSD) multi-species data, field data, or model ecosystems data. Depending on the type of data used, an assessment factor is applied to account for the uncertainty in extrapolating the toxicity data to an entire ecosystem. Calculation methods Assessment factor The use of assessment factors allows laboratory, single-species, and short-term toxicity data to be extrapolated to conservatively predict ecosystem effects and accounts for the uncertainty in the extrapolation. The value of the assessment factor depends on the uncertainty of the available data and ranges from 1 to 1000. Acute toxicity data Acute toxicity data include LC50 and EC50 values. These data are typically screened for quality and relevance and ideally include species from multiple trophic levels and/or taxonomic groups. The lowest LC50 in the compiled database is then divided by the assessment factor to calculate the PNEC for that dataset. The assessment factor applied to acute toxicity data is typically 1000. Chronic toxicity data Chronic toxicity data include NOEC values. The lowest NOEC value in the test dataset is divided by an assessment factor between 10 and 100, depending on the diversity of test organisms and the amount of data available. If there are more species or data, the assessment factor is lower. Species sensitivity data A PNEC may also be statistically derived from an SSD, which is a model of the variability in the sensitivity of multiple species to a single toxicant or other stressor. The hazardous concentration for five percent of the species (HC5) in the SSD is used to derive the PNEC. The HC5 is the concentration at which five percent of the species in the SSD exhibit an effect. The HC5 is typically divided by an assessment factor of 1 to 5. In many cases, SSDs may not exist due to the lack of data on a large number of species. In these cases, the assessment factor approach to derivation of a PNEC should be used. Field data or model ecosystems Field data or model ecosystems data include field toxicity data and mesocosm toxicity data. The magnitude of the assessment factor is study-specific for these types of data. Applications Environmental risk assessment PNEC is used extensively in Europe by the European Chemicals Agency, the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, and other toxicology agencies to assess environmental risk. PNEC values can be used in conjunction with predicted environmental concentration (PEC) values to calculate a risk characterization ratio (RCR), also called a risk quotient (RQ). The RCR is equal to the PEC divided by the PNEC for a specific chemical and is a deterministic approach to estimating environmental risk at local or regional scales. If the PNEC exceeds the PEC (RCR less than 1), the conclusion is that the chemical poses no significant environmental risk. 
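To make the calculation concrete, here is a minimal Python sketch of the assessment-factor approach and the risk characterization ratio described above. The toxicity values, the assessment factor, and the PEC are hypothetical placeholders, not values from any real assessment.

# Hypothetical illustration of the assessment-factor approach to a PNEC
# and the risk characterization ratio (RCR = PEC / PNEC).

def pnec_from_toxicity(toxicity_values_mg_per_l, assessment_factor):
    """Divide the lowest toxicity endpoint (e.g. LC50 or NOEC) by an
    assessment factor to obtain a conservative PNEC."""
    return min(toxicity_values_mg_per_l) / assessment_factor

def risk_characterization_ratio(pec_mg_per_l, pnec_mg_per_l):
    """RCR (also called risk quotient): values below 1 indicate no
    significant environmental risk under this deterministic approach."""
    return pec_mg_per_l / pnec_mg_per_l

# Hypothetical acute LC50 data for three species (mg/L); an assessment
# factor of 1000 is typical for acute single-species data.
acute_lc50 = [4.2, 11.0, 7.5]
pnec = pnec_from_toxicity(acute_lc50, assessment_factor=1000)

# Hypothetical predicted environmental concentration (mg/L).
pec = 0.001
rcr = risk_characterization_ratio(pec, pnec)
print(f"PNEC = {pnec:.4f} mg/L, RCR = {rcr:.2f}")
# RCR < 1 would be read as no significant risk; RCR >= 1 would trigger
# refinement of the exposure or effects data.

An RCR at or above 1 does not prove harm; in practice it prompts refinement of the exposure estimate or the toxicity dataset before any regulatory conclusion.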
Assumptions Derivation of a PNEC for use in environmental risk assessment lacks some scientific validity because the assessment factors are derived empirically. Additionally, PNECs derived from single-species toxicity data assume that ecosystems are as sensitive as the most sensitive species tested and that ecosystem function is dependent on ecosystem structure. References Chemical safety Concentration indicators Environmental toxicology
Predicted no-effect concentration
[ "Chemistry", "Environmental_science" ]
758
[ "Chemical accident", "Toxicology", "Environmental toxicology", "nan", "Chemical safety" ]
46,833,807
https://en.wikipedia.org/wiki/Merit%20good
The economics concept of a merit good, originated by Richard Musgrave (1957, 1959), is a commodity which is judged that an individual or society should have on the basis of some concept of benefit, rather than ability and willingness to pay. The term is, perhaps, less often used presently than it was during the 1960s to 1980s but the concept still motivates many economic actions by governments. Examples include in-kind transfers such as the provision of food stamps to assist nutrition, the delivery of health services to improve quality of life and reduce morbidity, and subsidized housing and education. Definition A merit good can be defined as a good which would be under-consumed (and under-produced) by a free market economy, due to two main reasons: When consumed, a merit good creates positive externalities (an externality being a third party/spill-over effect of the consumption or production of the good/service). This means that there is a divergence between private benefit and public benefit when a merit good is consumed (i.e. the public benefit is greater than the private benefit). However, as consumers only take into account private benefits when consuming most goods, it means that they are under-consumed (and so under-produced). Individuals are short-term utility maximisers and so do not take into account the long term benefits of consuming a merit good, so they are under-consumed. Justification In many cases, merit goods provide services which should apply universally to everyone in a particular situation, an opinion that is similar to that of the concept of primary goods found in work by philosopher John Rawls or discussions about social inclusion. Lester Thurow claims that merit goods (and in-kind transfers) are justified based on "individual-societal preferences": just as we, as a society, permit each adult citizen an equal vote in elections, we should also entitle each person an equal right to life, and hence an equal right to life-saving medical care. On the supply side, it is sometimes suggested that there will be more endorsement in society for implicit redistribution via the provision of certain kinds of goods and services, rather than explicit redistribution through income. It is sometimes suggested that society in general may be in a better position to determine what individuals need, since individuals might act irrationally (for example, poor people receiving monetary transfers might use them to buy alcoholic drinks rather than nutritious food). Sometimes, merit and demerit goods (goods which are considered to affect the consumer negatively, but not society in general) are simply considered as an extension of the idea of externalities. A merit good may be described as a good that has positive externalities associated with it. Thus, an inoculation against a contagious disease may be considered as a merit good, because others who may not catch the disease from the inoculated person also benefit. However, merit and demerit goods can be defined in a different manner without reference to externalities. Consumers can be considered to under-consume merit goods (and over-consume demerit goods) due to an information failure. This happens because most consumers do not perceive quite how good or bad the good is for them: either they do not have the right information or lack relevant information. With this definition, a merit good is defined as a good that is better for a person than the person who may consume the good realises. 
Other possible rationales for treating some commodities as merit (or demerit) goods include public-goods aspects of a commodity, imposing community standards (prostitution, drugs, etc.), immaturity or incapacity, and addiction. A common element of all of these is recommending for or against some goods on a basis other than consumer choice. For the case of education, it can be argued that those lacking education are incapable of making an informed choice about the benefits of education, which would warrant compulsion (Musgrave, 1959, 14). In this case, the implementation of consumer sovereignty is the motivation, rather than rejection of consumer sovereignty. Public choice theory suggests that good government policies are an under-supplied merit good in a democracy. Criticism Arguments based on the supposedly irrational behavior of welfare recipients are often criticised as paternalistic, frequently by those who would like to minimize economic activity by government. The principle of consumer sovereignty in welfare also suggests that monetary transfers are preferable to in-kind transfers of the same cost. References Richard A. Musgrave (1957). "A Multiple Theory of Budget Determination," FinanzArchiv, New Series 25(1), pp. 33–43. _ (1959). The Theory of Public Finance, pp. 13–15. _ (1987). "Merit Goods," The New Palgrave: A Dictionary of Economics, v. 3, pp. 452–53. Richard A. Musgrave and Peggy B. Musgrave (1973). Public Finance in Theory and Practice, pp. 80–81. Roger Lee Mendoza ([2007] 2011). "Merit Goods at Fifty: Reexamining Musgrave's Theory in the Context of Health Policy." Review of Economic and Business Studies, v. 4 (2), pp. 275–284. Amartya K. Sen ([1977] 1982). "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory," in Choice, Welfare and Measurement, pp. 84–106. (1977 JSTOR version) Goods (economics)
Merit good
[ "Physics" ]
1,131
[ "Materials", "Goods (economics)", "Matter" ]
50,887,566
https://en.wikipedia.org/wiki/Access%20control%20expression
An access control expression with respect to a computer file system is a list of Boolean expressions attached to a file object. An access control expression specifies a Boolean formula that defines which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Each entry in a typical access control expression specifies an operation and an expression. For instance, if a file object has an access control expression that contains (read=(g:system OR u:Alice), write=(g:system AND !u:Bob)), this would give any member of the system group or the user named Alice permission to read the file, but would allow only members of the system group, except for the user named Bob, to write the file. Conventional access control lists can be viewed as a subset of access control expressions in which the only combining operation allowed is OR. Implementations Few systems implement access control expressions. The MapR file system is one such system. Move Toward Filesystem Access Control Expressions Early Unix and Unix-like systems pioneered flexible permission schemes based on user and group membership. Initially, users could only belong to a single group, but this constraint was relaxed to allow membership in multiple groups. With an unlimited number of groups, arbitrarily complex permission schemes could be implemented, but only at the cost of exponentially many groups. In order to allow more expressivity in the specification of filesystem permissions, a number of competing access control list implementations were developed for Microsoft Windows and for Unix and Unix-like systems such as Linux. Access control lists were a substantial improvement over simple user and group permissions, but still could not easily express some common requirements (such as banning a single user from a group). Access control expressions were developed in response to such needs. Comparison to access control lists The permission expressions supported by access control lists are a strict subset of those supported by access control expressions, but they have the virtue of being very fast and direct to implement. The cost of implementing access control expressions is no longer of much concern due to advances in hardware performance. See also Cacls Capability-based security Discretionary access control Role-based access control References Further reading Computer access control
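As an illustration of how such expressions might be evaluated, here is a minimal Python sketch. The syntax, the g:/u: prefixes, and the helper names follow the hypothetical example in the article above rather than any particular implementation such as the MapR file system.

# Minimal, hypothetical evaluator for access control expressions of the
# form used in the example above, e.g. read=(g:system OR u:Alice).
# It is a sketch, not the syntax of any real file system.

def atom_matches(atom, user, groups):
    """Evaluate a single term: u:<name>, g:<name>, or a negation !term."""
    if atom.startswith("!"):
        return not atom_matches(atom[1:], user, groups)
    kind, _, name = atom.partition(":")
    if kind == "u":
        return user == name
    if kind == "g":
        return name in groups
    raise ValueError(f"unknown atom: {atom}")

def allowed(expression, user, groups):
    """Evaluate a flat Boolean expression of OR or AND terms.
    Assumes a single operator type per expression, as in the example."""
    expression = expression.strip("() ")
    if " OR " in expression:
        return any(atom_matches(a.strip(), user, groups)
                   for a in expression.split(" OR "))
    if " AND " in expression:
        return all(atom_matches(a.strip(), user, groups)
                   for a in expression.split(" AND "))
    return atom_matches(expression, user, groups)

# The example expression from the text:
ace = {"read": "(g:system OR u:Alice)", "write": "(g:system AND !u:Bob)"}
print(allowed(ace["read"], "Alice", {"staff"}))    # True: named user
print(allowed(ace["write"], "Bob", {"system"}))    # False: excluded user
print(allowed(ace["write"], "Carol", {"system"}))  # True: group member

Note how the AND/NOT combination expresses "the group except one user" in a single entry, the requirement that plain access control lists, which only union grants, cannot state directly.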
Access control expression
[ "Engineering" ]
446
[ "Cybersecurity engineering", "Computer access control" ]
50,889,773
https://en.wikipedia.org/wiki/Solar-assisted%20heat%20pump
A solar-assisted heat pump (SAHP) is a machine that combines a heat pump and thermal solar panels and/or PV solar panels in a single integrated system. Typically these two technologies are used separately (or merely placed in parallel) to produce hot water. In this system the solar thermal panel performs the function of the low-temperature heat source, and the heat produced is used to feed the heat pump's evaporator. The goal of this system is to achieve a high coefficient of performance (COP) and thus produce heat more efficiently and less expensively. It is possible to use any type of solar thermal panel (sheet and tubes, roll-bond, heat pipe, thermal plates) or hybrid panel (mono/polycrystalline, thin film) in combination with the heat pump. The use of a hybrid panel is preferable because it covers part of the heat pump's electricity demand, reducing the power consumption and consequently the variable costs of the system. Optimization Optimizing the operating conditions of this system is the main challenge, because the performance of the two sub-systems follows opposing trends: for example, decreasing the evaporation temperature of the working fluid increases the thermal efficiency of the solar panel but decreases the performance of the heat pump, and consequently the COP. The optimization target is normally to minimize the electrical consumption of the heat pump, or the primary energy required by an auxiliary boiler that supplies the load not covered by the renewable source. Configurations There are two possible configurations of this system, distinguished by the presence or absence of an intermediate fluid that transports the heat from the panel to the heat pump. Machines called indirect-expansion mainly use water as a heat transfer fluid, mixed with an antifreeze (usually glycol) to avoid ice formation during the winter period. Machines called direct-expansion place the refrigerant fluid directly inside the hydraulic circuit of the thermal panel, where the phase transition takes place. This second configuration, even though it is more complex from a technical point of view, has several advantages: better transfer of the heat produced by the thermal panel to the working fluid, and therefore greater thermal efficiency of the evaporator, owing to the absence of an intermediate fluid; the presence of an evaporating fluid allows a uniform temperature distribution in the thermal panel, with a consequent increase in thermal efficiency (in normal operating conditions the local thermal efficiency of a solar panel decreases from the inlet to the outlet because the fluid temperature increases); with a hybrid solar panel, in addition to the advantage described in the previous point, the electrical efficiency of the panel increases (for similar reasons). Comparison Generally speaking, the use of this integrated system is an efficient way to employ the heat produced by the thermal panels in the winter period, heat that normally would not be exploited because its temperature is too low. Separated production systems Compared with using a heat pump alone, it is possible to reduce the electrical energy consumed by the machine as the weather evolves from winter to spring, and eventually to use the thermal solar panels alone to meet the entire heat demand (only in the case of an indirect-expansion machine), thus saving on variable costs. 
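To illustrate the opposing trends described in the Optimization section above, here is a toy Python sketch that searches for the evaporation temperature minimizing the heat pump's electrical consumption. The panel efficiency curve, the Carnot-fraction COP model, and all numerical values are illustrative assumptions, not data for any real system.

# Toy model of the SAHP optimization trade-off described above: a lower
# evaporation temperature raises solar-panel thermal efficiency but
# lowers the heat pump COP. All parameters are illustrative assumptions.

def panel_efficiency(t_evap_c, t_ambient_c=5.0, irradiance=400.0):
    """Simple linear collector model: efficiency falls as the collector
    (roughly the evaporation) temperature rises above ambient."""
    eta0, loss_coeff = 0.75, 6.0  # optical efficiency, W/(m^2 K), assumed
    eta = eta0 - loss_coeff * (t_evap_c - t_ambient_c) / irradiance
    return max(eta, 0.0)

def heat_pump_cop(t_evap_c, t_cond_c=45.0, carnot_fraction=0.45):
    """COP approximated as a fixed fraction of the Carnot COP."""
    t_evap_k, t_cond_k = t_evap_c + 273.15, t_cond_c + 273.15
    return carnot_fraction * t_cond_k / (t_cond_k - t_evap_k)

def electrical_power(t_evap_c, heat_demand_w=5000.0):
    """Compressor electrical power needed to deliver the heat demand,
    provided the panel can actually supply the evaporator load."""
    cop = heat_pump_cop(t_evap_c)
    evaporator_load = heat_demand_w * (1.0 - 1.0 / cop)
    panel_supply = panel_efficiency(t_evap_c) * 400.0 * 20.0  # 20 m^2 assumed
    if panel_supply < evaporator_load:
        return float("inf")  # panel too small at this evaporation temperature
    return heat_demand_w / cop

# Scan candidate evaporation temperatures and pick the best one.
candidates = [t * 0.5 for t in range(-20, 61)]  # -10 degC to 30 degC
best = min(candidates, key=electrical_power)
print(f"Best evaporation temperature = {best:.1f} degC, "
      f"COP = {heat_pump_cop(best):.2f}, "
      f"compressor power = {electrical_power(best):.0f} W")

Under these made-up parameters the optimum sits where the panel can just cover the evaporator load: raising the temperature further improves the COP but starves the evaporator, which is exactly the balance the Optimization section describes.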
In comparison with a system with only thermal panels, it is possible to provide a greater part of the required winter heating using a non-fossil energy source. Traditional heat pumps Compared to geothermal heat pumps, the main advantage is that the installation of a piping field in the soil is not required, which results in a lower investment cost (drilling accounts for about 50% of the cost of a geothermal heat pump system) and in more flexibility of machine installation, even in areas where available space is limited. Furthermore, there are no risks related to possible thermal impoverishment of the soil. As with air source heat pumps, solar-assisted heat pump performance is affected by atmospheric conditions, although this effect is less significant: performance is governed mainly by varying solar radiation intensity rather than by air temperature oscillations. This produces a greater SCOP (seasonal COP). Additionally, the evaporation temperature of the working fluid is higher than in air source heat pumps, so in general the coefficient of performance is significantly higher. Low temperature conditions In general, a heat pump can evaporate at temperatures below the ambient temperature. In a solar-assisted heat pump this brings the temperature distribution of the thermal panels below ambient. In this condition the thermal losses of the panels towards the environment become additional energy available to the heat pump, and it is possible for the thermal efficiency of the solar panels to exceed 100%. Another free contribution under these low-temperature conditions comes from the possible condensation of water vapor on the surface of the panels, which provides additional heat, equal to the latent heat of condensation, to the heat transfer fluid (normally a small part of the total heat collected by the panels). Heat pump with double cold sources In the simple configuration of a solar-assisted heat pump, the solar panels are the only heat source for the evaporator. A configuration with an additional heat source also exists. The goal is to obtain further energy savings, but the management and optimization of the system become more complex. The geothermal-solar configuration allows the size of the piping field (and the investment) to be reduced and allows regeneration of the ground during summer using the heat collected by the thermal panels. The air-solar configuration provides an acceptable heat input even on cloudy days, while maintaining the compactness and ease of installation of the system. Challenges As in regular air conditioners, one of the issues is to keep the evaporation temperature high, especially when the sunlight is weak and the ambient airflow is low. See also Renewable energy Renewable heat Heat pump Geothermal heat pump Refrigeration cycle Solar panel Photovoltaic thermal hybrid solar collector Solar thermal collector Energy conversion efficiency References External links Heat pumps Building engineering Energy conversion Energy recovery Energy technology
Solar-assisted heat pump
[ "Engineering" ]
1,234
[ "Building engineering", "Civil engineering", "Architecture" ]
50,890,026
https://en.wikipedia.org/wiki/Direct%20coupling%20analysis
Direct coupling analysis or DCA is an umbrella term comprising several methods for analyzing sequence data in computational biology. The common idea of these methods is to use statistical modeling to quantify the strength of the direct relationship between two positions of a biological sequence, excluding effects from other positions. This contrasts usual measures of correlation, which can be large even if there is no direct relationship between the positions (hence the name direct coupling analysis). Such a direct relationship can for example be the evolutionary pressure for two positions to maintain mutual compatibility in the biomolecular structure of the sequence, leading to molecular coevolution between the two positions. DCA has been used in the inference of protein residue contacts, RNA structure prediction, the inference of protein-protein interaction networks, the modeling of fitness landscapes, the generation of novel functional proteins, and the modeling of protein evolution. Mathematical Model and Inference Mathematical Model The basis of DCA is a statistical model for the variability within a set of phylogenetically related biological sequences. When fitted to a multiple sequence alignment (MSA) of sequences of length $N$, the model defines a probability for all possible sequences of the same length. This probability can be interpreted as the probability that the sequence in question belongs to the same class of sequences as the ones in the MSA, for example the class of all protein sequences belonging to a specific protein family. We denote a sequence by $a = (a_1, a_2, \ldots, a_N)$, with the $a_i$ being categorical variables representing the monomers of the sequence (if the sequences are for example aligned amino acid sequences of proteins of a protein family, the $a_i$ take as values any of the 20 standard amino acids). The probability of a sequence within the model is then defined as $$P(a_1, \ldots, a_N) = \frac{1}{Z} \exp\left(\sum_{i=1}^{N} h_i(a_i) + \sum_{1 \le i < j \le N} J_{ij}(a_i, a_j)\right),$$ where the $h_i(\cdot)$ and $J_{ij}(\cdot,\cdot)$ are sets of real numbers representing the parameters of the model (more below) and $Z$ is a normalization constant (a real number) to ensure $\sum_{a_1, \ldots, a_N} P(a_1, \ldots, a_N) = 1$. The parameters $h_i(a_i)$ depend on one position $i$ and the symbol $a_i$ at this position. They are usually called fields and represent the propensity of a symbol to be found at a certain position. The parameters $J_{ij}(a_i, a_j)$ depend on pairs of positions $i, j$ and the symbols $a_i, a_j$ at these positions. They are usually called couplings and represent an interaction, i.e. a term quantifying how compatible the symbols at both positions are with each other. The model is fully connected, so there are interactions between all pairs of positions. The model can be seen as a generalization of the Ising model, with spins not only taking two values, but any value from a given finite alphabet. In fact, when the size of the alphabet is 2, the model reduces to the Ising model. Since it is also reminiscent of the standard Potts model of statistical physics, it is often called a Potts model. Even knowing the probabilities of all sequences does not determine the parameters uniquely. For example, a simple transformation of the parameters, such as $h_i(a_i) \to h_i(a_i) + c_i$ for any set of real numbers $c_i$ (with $Z$ rescaled accordingly), leaves the probabilities the same. The likelihood function is invariant under such transformations as well, so the data cannot be used to fix these degrees of freedom (although a prior on the parameters might do so). A convention often found in literature is to fix these degrees of freedom such that the Frobenius norm of the coupling matrix is minimized (independently for every pair of positions $i$ and $j$). 
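As a concrete illustration of the model above, here is a minimal Python sketch that evaluates the probability of a short sequence under a Potts model with given fields and couplings. The tiny alphabet, sequence length, and random parameters are purely illustrative assumptions; real DCA models use the 20 (plus gap) amino-acid alphabet and alignment-length sequences.

# Minimal sketch of the Potts model used in DCA:
# P(a) proportional to exp(sum_i h_i(a_i) + sum_{i<j} J_ij(a_i, a_j)).
# Alphabet, length and parameters are toy assumptions for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(0)
q, N = 4, 5                      # alphabet size and sequence length (toy values)
h = rng.normal(size=(N, q))      # fields h_i(a)
J = rng.normal(scale=0.3, size=(N, N, q, q))  # couplings J_ij(a, b)

def potts_energy(seq):
    """Return sum_i h_i(a_i) + sum_{i<j} J_ij(a_i, a_j) for one sequence."""
    e = sum(h[i, seq[i]] for i in range(N))
    e += sum(J[i, j, seq[i], seq[j]] for i in range(N) for j in range(i + 1, N))
    return e

# Brute-force normalization Z is only feasible for toy sizes: q**N terms,
# which is exactly why the Inference section below lists approximations.
Z = sum(np.exp(potts_energy(s)) for s in itertools.product(range(q), repeat=N))

seq = (0, 1, 2, 3, 0)
print("P(seq) =", np.exp(potts_energy(seq)) / Z)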
Maximum Entropy Derivation To justify the Potts model, it is often noted that it can be derived following a maximum entropy principle: For a given set of sample covariances and frequencies, the Potts model represents the distribution with the maximal Shannon entropy of all distributions reproducing those covariances and frequencies. For a multiple sequence alignment, the sample covariances are defined as $$C_{ij}(a,b) = f_{ij}(a,b) - f_i(a)\,f_j(b),$$ where $f_{ij}(a,b)$ is the frequency of finding symbols $a$ and $b$ at positions $i$ and $j$ in the same sequence in the MSA, and $f_i(a)$ the frequency of finding symbol $a$ at position $i$. The Potts model is then the unique distribution that maximizes the functional $$\mathcal{F}[P] = -\sum_{a_1,\ldots,a_N} P(a_1,\ldots,a_N)\,\ln P(a_1,\ldots,a_N) + \sum_{i<j}\sum_{a,b} \lambda_{ij}(a,b)\,\big(P_{ij}(a,b) - f_{ij}(a,b)\big) + \sum_{i}\sum_{a} \lambda_{i}(a)\,\big(P_{i}(a) - f_{i}(a)\big) + \Omega\,\Big(\sum_{a_1,\ldots,a_N} P(a_1,\ldots,a_N) - 1\Big).$$ The first term in the functional is the Shannon entropy of the distribution. The $\lambda_{ij}(a,b)$ and $\lambda_i(a)$ are Lagrange multipliers to ensure $P_{ij}(a,b) = f_{ij}(a,b)$ and $P_i(a) = f_i(a)$, with $P_{ij}(a,b)$ (respectively $P_i(a)$) being the marginal probability to find symbols $a$ and $b$ at positions $i$ and $j$ (respectively symbol $a$ at position $i$). The Lagrange multiplier $\Omega$ ensures normalization. Maximizing this functional and identifying $J_{ij}(a,b) = \lambda_{ij}(a,b)$ and $h_i(a) = \lambda_i(a)$ leads to the Potts model above. This procedure only gives the functional form of the Potts model, while the numerical values of the Lagrange multipliers (identified with the parameters) still have to be determined by fitting the model to the data. Direct Couplings and Indirect Correlation The central point of DCA is to interpret the $J_{ij}$ (which can be represented as a $q \times q$ matrix if there are $q$ possible symbols) as direct couplings. If two positions are under joint evolutionary pressure (for example to maintain a structural bond), one might expect these couplings to be large because only sequences with fitting pairs of symbols should have a significant probability. On the other hand, a large correlation between two positions does not necessarily mean that the couplings are large, since large couplings between e.g. positions $i$ and $j$ and between positions $j$ and $k$ might lead to large correlations between positions $i$ and $k$, mediated by position $j$. In fact, such indirect correlations have been implicated in the high false positive rate when inferring protein residue contacts using correlation measures like mutual information. Inference The inference of the Potts model on a multiple sequence alignment (MSA) using maximum likelihood estimation is usually computationally intractable, because one needs to calculate the normalization constant $Z$, which is, for sequence length $N$ and $q$ possible symbols, a sum of $q^N$ terms (which means, for example, for a small protein domain family with 30 positions about $20^{30} \approx 10^{39}$ terms). Therefore, numerous approximations and alternatives have been developed: mpDCA (inference based on message passing/belief propagation) mfDCA (inference based on a mean-field approximation) gaussDCA (inference based on a Gaussian approximation) plmDCA (inference based on pseudo-likelihoods) bmDCA (inference based on Boltzmann machines) Adaptive Cluster Expansion All of these methods lead to some form of estimate for the set of parameters maximizing the likelihood of the MSA. Many of them include regularization or prior terms to ensure a well-posed problem or promote a sparse solution. Applications Protein Residue Contact Prediction A possible interpretation of large values of couplings in a model fitted to an MSA of a protein family is the existence of conserved contacts between positions (residues) in the family. Such a contact can lead to molecular coevolution, since a mutation in one of the two residues, without a compensating mutation in the other residue, is likely to disrupt protein structure and negatively affect the fitness of the protein. 
Residue pairs for which there is a strong selective pressure to maintain mutual compatibility are therefore expected to mutate together or not at all. This idea (which was known in the literature long before the conception of DCA) has been used to predict protein contact maps, for example by analyzing the mutual information between protein residues. Within the framework of DCA, a score for the strength of the direct interaction between a pair of residues $i, j$ is often defined using the Frobenius norm of the corresponding coupling matrix, $F_{ij} = \sqrt{\sum_{a,b} J_{ij}(a,b)^2}$, and applying an average product correction (APC): $$F_{ij}^{APC} = F_{ij} - \frac{F_{i\cdot}\,F_{\cdot j}}{F_{\cdot\cdot}},$$ where $F_{ij}$ has been defined above and $F_{i\cdot}$, $F_{\cdot j}$, and $F_{\cdot\cdot}$ denote the averages of $F_{ij}$ over the indicated indices. This correction term was first introduced for mutual information and is used to remove biases of specific positions that produce large $F_{ij}$. Scores that are invariant under parameter transformations that do not affect the probabilities have also been used. Sorting all residue pairs by this score results in a list in which the top of the list is strongly enriched in residue contacts when compared to the protein contact map of a homologous protein. High-quality predictions of residue contacts are valuable as prior information in protein structure prediction. Inference of protein-protein interaction DCA can be used for detecting conserved interactions between protein families and for predicting which residue pairs form contacts in a protein complex. Such predictions can be used when generating structural models for these complexes, or when inferring protein-protein interaction networks made from more than two proteins. Modeling of fitness landscapes DCA can be used to model fitness landscapes and to predict the effect of a mutation in the amino acid sequence of a protein on its fitness. External links Online services: EVcouplings Gremlin DCA Webservice AmoAi ELIHKSIR Source code: gplmDCA GaussDCA plmDCA Useful applications: DCA-MOL: a PyMOL plugin to analyze DCA results on a structure References Bioinformatics
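The following Python sketch illustrates the scoring pipeline described above on a toy alignment: single- and pair-frequencies, a naive mean-field estimate of the couplings (couplings approximated by the negative inverse of the connected-correlation matrix, in the spirit of mfDCA), Frobenius norms, and the average product correction. It is a didactic simplification with made-up data, not a validated implementation of any published DCA package.

# Toy mean-field DCA scoring sketch: frequencies -> connected correlations
# -> couplings (negative inverse of the correlation matrix, mfDCA-style)
# -> Frobenius norm -> average product correction. Data are made up.
import numpy as np

msa = np.array([[0, 1, 2, 1],          # toy alignment: 6 sequences,
                [0, 1, 2, 0],          # 4 positions, alphabet {0, 1, 2}
                [1, 2, 2, 1],
                [0, 1, 0, 1],
                [1, 2, 0, 0],
                [0, 1, 2, 1]])
M, N = msa.shape
q = 3
pseudocount = 0.5

# Single and pair frequencies with a small pseudocount for regularization.
fi = np.full((N, q), pseudocount / q)
fij = np.full((N, N, q, q), pseudocount / q**2)
for s in msa:
    for i in range(N):
        fi[i, s[i]] += 1.0 / M
        for j in range(N):
            fij[i, j, s[i], s[j]] += 1.0 / M
fi /= (1.0 + pseudocount)
fij /= (1.0 + pseudocount)

# Connected correlation matrix over the first q-1 states of each position.
C = np.zeros((N * (q - 1), N * (q - 1)))
for i in range(N):
    for j in range(N):
        for a in range(q - 1):
            for b in range(q - 1):
                C[i*(q-1)+a, j*(q-1)+b] = fij[i, j, a, b] - fi[i, a] * fi[j, b]

# Pseudo-inverse for numerical robustness on this tiny toy dataset.
J = -np.linalg.pinv(C)

# Frobenius norm per position pair, then APC (diagonal included for simplicity).
F = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            block = J[i*(q-1):(i+1)*(q-1), j*(q-1):(j+1)*(q-1)]
            F[i, j] = np.linalg.norm(block)
F_apc = F - np.outer(F.mean(axis=1), F.mean(axis=0)) / F.mean()
np.fill_diagonal(F_apc, 0.0)
print(np.round(F_apc, 2))   # highest-scoring pairs would be contact candidates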
Direct coupling analysis
[ "Engineering", "Biology" ]
1,683
[ "Bioinformatics", "Biological engineering" ]
58,540,473
https://en.wikipedia.org/wiki/Aspergillus%20salwaensis
Aspergillus salwaensis is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 2014. It has been reported to produce penicillic acid. Growth and morphology A. salwaensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References salwaensis Fungi described in 2014 Fungus species
Aspergillus salwaensis
[ "Biology" ]
119
[ "Fungi", "Fungus species" ]
58,551,640
https://en.wikipedia.org/wiki/Giant%20GRB%20Ring
The Giant GRB Ring is a ring of 9 gamma-ray bursts (GRBs) that may be associated with one of the largest known cosmic structures. It was discovered in July 2015 by a team of Hungarian and American astronomers led by L.G. Balazs while analyzing data from different gamma-ray and X-ray telescopes, in particular the Swift spacecraft. The ring of GRBs lies at a distance of about 2.8 gigaparsecs (9.1 billion light years) from Earth, at a redshift between 0.78 and 0.86, and measures about 1.72 gigaparsecs (5.6 billion light years) in diameter, making it one of the largest structures known. Typically, the distribution of GRBs in the universe appears in sets of less than the 2σ distribution, or with fewer than two GRBs in the average data of the point-radius system. Thus, such a concentration appears extremely unlikely given accepted theoretical models. Proposals include the existence of a giant supergalactic structure. This would be an enormous structure, with a mean size of about 5.6 billion light years. Such a supercluster could explain the significant clustering of GRBs because of its tie to star formation. If such a structure did exist, it would be one of the largest structures in the observable universe. Discovery In early July 2015, after the discovery of the Hercules–Corona Borealis Great Wall, I. Horvath, J. Hakkila and Z. Bagoly, among others, conducted a further detailed analysis of the spatial distribution of GRBs within the distant universe. Drawing on more than 15 years of data from the Swift Gamma-Ray Burst Mission and from ground-based telescopes, they assessed the data to see whether any more structures could be seen using the method of GRB correlation. They noticed a significant clustering of GRBs within z = 0.78–0.86, with nine GRBs concentrated in a region of 43 by 30 degrees of the sky. With further tests and analyses of the clustering, they found that the sample was more concentrated than expected, an indication of a massive galactic structure in the vicinity. Characteristics The authors list the following characteristics for the 9 GRBs in the ring (l and b are standard Sun-referenced galactic coordinates). It is approximately 9.1 billion light years from Earth and about 5.6 billion light years across. See also List of largest cosmic structures List of largest galaxies References Galaxy superclusters Gamma-ray bursts Large-scale structure of the cosmos
Giant GRB Ring
[ "Physics", "Astronomy" ]
535
[ "Physical phenomena", "Astronomical events", "Galaxy superclusters", "Gamma-ray bursts", "Stellar phenomena", "Astronomical objects" ]
43,654,989
https://en.wikipedia.org/wiki/Lode%20coordinates
Lode coordinates or Haigh–Westergaard coordinates are a set of tensor invariants that span the space of real, symmetric, second-order, 3-dimensional tensors and are isomorphic with respect to principal stress space. This right-handed orthogonal coordinate system is named in honor of the German scientist Dr. Walter Lode because of his seminal paper written in 1926 describing the effect of the middle principal stress on metal plasticity. Other examples of sets of tensor invariants are the set of principal stresses or the set of kinematic invariants. The Lode coordinate system can be described as a cylindrical coordinate system within principal stress space, with a coincident origin and the z-axis parallel to the vector $(1,1,1)$ (the hydrostatic axis). Mechanics invariants The Lode coordinates are most easily computed using the mechanics invariants. These invariants are a mixture of the invariants of the Cauchy stress tensor, $\boldsymbol{\sigma}$, and the stress deviator, $\boldsymbol{s} = \boldsymbol{\sigma} - \tfrac{1}{3}\mathrm{tr}(\boldsymbol{\sigma})\,\boldsymbol{I}$, and are given by $$I_1 = \mathrm{tr}(\boldsymbol{\sigma}), \qquad J_2 = \tfrac{1}{2}\,\boldsymbol{s}:\boldsymbol{s}, \qquad J_3 = \det(\boldsymbol{s}),$$ which can be written equivalently in Einstein notation $$I_1 = \sigma_{kk}, \qquad J_2 = \tfrac{1}{2}\,s_{ij} s_{ji} = \tfrac{1}{2}\,s_{ij} s_{ij}, \qquad J_3 = \tfrac{1}{6}\,\epsilon_{ijk}\,\epsilon_{pqr}\,s_{ip} s_{jq} s_{kr},$$ where $\epsilon_{ijk}$ is the Levi-Civita symbol (or permutation symbol) and the last two forms for $J_2$ are equivalent because $\boldsymbol{s}$ is symmetric ($s_{ij} = s_{ji}$). The gradients of these invariants can be calculated by $$\frac{\partial I_1}{\partial \boldsymbol{\sigma}} = \boldsymbol{I}, \qquad \frac{\partial J_2}{\partial \boldsymbol{\sigma}} = \boldsymbol{s}, \qquad \frac{\partial J_3}{\partial \boldsymbol{\sigma}} = \boldsymbol{s}\cdot\boldsymbol{s} - \tfrac{2}{3}\,J_2\,\boldsymbol{I} = \boldsymbol{T},$$ where $\boldsymbol{I}$ is the second-order identity tensor and $\boldsymbol{T}$ is called the Hill tensor. Axial coordinate The $z$-coordinate is found by calculating the magnitude of the orthogonal projection of the stress state onto the hydrostatic axis, $$z = \boldsymbol{\sigma} : \boldsymbol{E}_z = \frac{I_1}{\sqrt{3}},$$ where $\boldsymbol{E}_z = \tfrac{1}{\sqrt{3}}\,\boldsymbol{I}$ is the unit normal in the direction of the hydrostatic axis. Radial coordinate The $r$-coordinate is found by calculating the magnitude of the stress deviator (the orthogonal projection of the stress state into the deviatoric plane), $$r = \|\boldsymbol{s}\| = \sqrt{\boldsymbol{s}:\boldsymbol{s}} = \sqrt{2 J_2},$$ where $\boldsymbol{E}_r = \boldsymbol{s}/\|\boldsymbol{s}\|$ is a unit tensor in the direction of the radial component. (The relation $r = \sqrt{2 J_2}$ follows by splitting $\boldsymbol{\sigma}$ into its isotropic and deviatoric parts: because $\boldsymbol{s}$ is deviatoric, its inner product with the isotropic part is zero, and $\boldsymbol{s}:\boldsymbol{s} = 2 J_2$ by the definition of $J_2$.) Lode angle – angular coordinate The Lode angle can be considered, rather loosely, a measure of loading type. The Lode angle varies with respect to the middle eigenvalue of the stress. There are many definitions of the Lode angle that each utilize different trigonometric functions: the positive sine, negative sine, and positive cosine (here denoted $\theta_s$, $\bar{\theta}_s$, and $\theta_c$, respectively), $$\sin(3\theta_s) = -\sin(3\bar{\theta}_s) = \cos(3\theta_c) = \frac{3\sqrt{3}}{2}\,\frac{J_3}{J_2^{3/2}},$$ and they are related by $\bar{\theta}_s = -\theta_s$ and $\theta_c = \tfrac{\pi}{6} - \theta_s$. (The relation between $\theta_s$ and $\theta_c$ follows from the trigonometric identity relating sine and cosine by a shift of $\pi/2$; because cosine is an even function and the range of the inverse cosine is usually $[0,\pi]$, the branch is chosen so that $\theta_c$ remains positive.) These definitions are all defined over a range of $\pi/3$ (60°), for example $\theta_s, \bar{\theta}_s \in [-\pi/6, \pi/6]$ and $\theta_c \in [0, \pi/3]$. The unit tensor in the angular direction, which completes the orthonormal basis, can be calculated from $\boldsymbol{E}_z$ and $\boldsymbol{E}_r$. Meridional profile The meridional profile is a 2D plot of $(z, r)$ holding $\theta$ constant and is sometimes plotted using scalar multiples of $(z, r)$. It is commonly used to demonstrate the pressure dependence of a yield surface or the pressure-shear trajectory of a stress path. Because $r$ is non-negative, the plot usually omits the negative portion of the $r$-axis, but it can be included to illustrate effects at opposing Lode angles (usually triaxial extension and triaxial compression). 
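As a worked illustration of the definitions above, here is a small Python sketch that computes the Lode coordinates of a stress tensor from the mechanics invariants; the sample stress values are arbitrary, and the positive-cosine convention for the Lode angle is used.

# Compute Lode (Haigh-Westergaard) coordinates (z, r, theta) of a stress
# tensor from the mechanics invariants I1, J2, J3. The sample stress is
# arbitrary; the positive-cosine Lode angle convention is used here.
import numpy as np

sigma = np.array([[ 50.0, 10.0,  0.0],
                  [ 10.0, 20.0,  5.0],
                  [  0.0,  5.0, -30.0]])   # MPa, illustrative values

I1 = np.trace(sigma)
s = sigma - (I1 / 3.0) * np.eye(3)          # stress deviator
J2 = 0.5 * np.tensordot(s, s)               # (1/2) s_ij s_ij
J3 = np.linalg.det(s)

z = I1 / np.sqrt(3.0)                       # axial (hydrostatic) coordinate
r = np.sqrt(2.0 * J2)                       # radial (deviatoric) coordinate
arg = np.clip(1.5 * np.sqrt(3.0) * J3 / J2**1.5, -1.0, 1.0)
theta_c = np.arccos(arg) / 3.0              # cosine convention, 0 to pi/3

print(f"z = {z:.2f} MPa, r = {r:.2f} MPa, theta_c = {np.degrees(theta_c):.1f} deg")
# Cross-check against related scalar measures: mean stress and von Mises stress.
print("p =", I1 / 3.0, " sigma_vM =", np.sqrt(3.0 * J2))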
One of the benefits of plotting the meridional profile with $(z, r)$ is that it is a geometrically accurate depiction of the yield surface. If a non-isomorphic pair is used for the meridional profile, then the normal to the yield surface will not appear normal in the meridional profile. Any pair of coordinates that differ from $(z, r)$ by constant multiples of equal absolute value are also isomorphic with respect to principal stress space. As an example, pressure $p$ and the von Mises stress $\sigma_{vM}$ are not an isomorphic coordinate pair and, therefore, distort the yield surface, because $$p = \frac{I_1}{3} = \frac{z}{\sqrt{3}} \qquad \text{and} \qquad \sigma_{vM} = \sqrt{3 J_2} = \sqrt{\tfrac{3}{2}}\; r,$$ so the two axes are scaled by constant multiples of different absolute value. Octahedral profile The octahedral profile is a 2D plot of $(r, \theta)$ holding $z$ constant. Plotting the yield surface in the octahedral plane demonstrates the level of Lode angle dependence. The octahedral plane is sometimes referred to as the 'pi plane' or 'deviatoric plane'. The octahedral profile is not necessarily constant for different values of pressure, with the notable exceptions of the von Mises yield criterion and the Tresca yield criterion, which are constant for all values of pressure. A note on terminology The term Haigh–Westergaard space is ambiguously used in the literature to mean both the Cartesian principal stress space and the cylindrical Lode coordinate space. See also Yield (engineering) Plasticity (physics) Stress Henri Tresca von Mises stress Mohr–Coulomb theory Strain Strain tensor Stress–energy tensor Stress concentration 3-D elasticity References Solid mechanics Materials science
Lode coordinates
[ "Physics", "Materials_science", "Engineering" ]
1,050
[ "Solid mechanics", "Applied and interdisciplinary physics", "Materials science", "Mechanics", "nan" ]
43,657,450
https://en.wikipedia.org/wiki/Electrical%20conduit
An electrical conduit is a tube used to protect and route electrical wiring in a building or structure. Electrical conduit may be made of metal, plastic, fiber, or fired clay. Most conduit is rigid, but flexible conduit is used for some purposes. Conduit is generally installed by electricians at the site of installation of electrical equipment. Its use, form, and installation details are often specified by wiring regulations, such as the US National Electrical Code (NEC) and other building codes. Comparison with other wiring methods Electrical conduit provides very good protection to enclosed conductors from impact, moisture, and chemical vapors. Varying numbers, sizes, and types of conductors can be pulled into a conduit, which simplifies design and construction compared to multiple runs of cables or the expense of customized composite cable. Wiring systems in buildings may be subject to frequent alterations. Frequent wiring changes are made simpler and safer through the use of electrical conduit, as existing conductors can be withdrawn and new conductors installed, with little disruption along the path of the conduit. A conduit system can be made waterproof or submersible. Metal conduit can be used to shield sensitive circuits from electromagnetic interference, and also can prevent emission of such interference from enclosed power cables. Non-metallic conduits resist corrosion and are light-weight, reducing installation labor cost. When installed with proper sealing fittings, a conduit will not permit the flow of flammable gases and vapors, which provides protection from fire and explosion hazard in areas handling volatile substances. Some types of conduit are approved for direct encasement in concrete. This is commonly used in commercial buildings to allow electrical and communication outlets to be installed in the middle of large open areas. For example, retail display cases and open-office areas use floor-mounted conduit boxes to connect power and communications cables. Both metal and plastic conduit can be bent at the job site to allow a neat installation without excessive numbers of manufactured fittings. This is particularly advantageous when following irregular or curved building profiles. Special tube bending equipment is used to bend the conduit without kinking or denting it. The cost of conduit installation is higher than other wiring methods due to the cost of materials and labor. In applications such as residential construction, the high degree of physical damage protection may not be required, so the expense of conduit is not warranted. (In certain jurisdictions, such as Chicago, Illinois, the use of conduit is always required.) Conductors installed within conduit cannot dissipate heat as readily as those installed in open wiring, so the current capacity of each conductor must be reduced (derated) if many are installed in one conduit. It is impractical, and prohibited by wiring regulations, to have more than 360 degrees of total bends in a run of conduit, so special outlet fittings must be provided to allow conductors to be installed without damage in such runs. Some types of metal conduit may also serve as a useful bonding conductor for grounding (earthing), but wiring regulations may also dictate workmanship standards or supplemental means of grounding for certain types. While metal conduit may sometimes be used as a grounding conductor, the circuit length is limited. 
For example, a long run of conduit as grounding conductor may have too high an electrical resistance, and not allow proper operation of overcurrent devices on a fault. Types Conduit systems are classified by the wall thickness, mechanical stiffness, and material used to make the tubing. Materials may be chosen for mechanical protection, corrosion resistance, and overall cost of the installation (labor plus material cost). Wiring regulations for electrical equipment in hazardous areas may require particular types of conduit to be used to provide an approved installation. Metal Rigid metal conduit (RMC) is a thick-walled threaded tubing, usually made of coated steel, stainless steel or aluminum. Galvanized rigid conduit (GRC) is galvanized steel tubing, with a tubing wall that is thick enough to allow it to be threaded. Its common applications are in commercial and industrial construction. It is designed to protect wire and connectors. Intermediate metal conduit (IMC) is a steel tubing heavier than EMT but lighter than RMC. It may be threaded. Electrical metallic tubing (EMT), sometimes called thin-wall, is commonly used instead of galvanized rigid conduit (GRC), as it is less costly and lighter than GRC. EMT itself is not threaded, but can be used with threaded fittings that clamp to it. Lengths of conduit are connected to each other and to equipment with clamp-type fittings. Like GRC, EMT is more common in commercial and industrial buildings than in residential applications. EMT is generally made of coated steel, though it may be aluminum. Aluminum conduit, similar to galvanized steel conduit, is a rigid tube, generally used in commercial and industrial applications where a higher resistance to corrosion is needed. Such locations would include food processing plants, where large amounts of water and cleaning chemicals would make galvanized conduit unsuitable. Aluminum cannot be directly embedded in concrete, since the metal reacts with the alkalis in cement. The conduit may be coated to prevent corrosion by incidental contact with concrete. Aluminum conduit is generally lower cost than steel in addition to having a lower labor cost to install, since a length of aluminum conduit will have about one-third the weight of an equally-sized rigid steel conduit. Non-metal PVC conduit has long been considered the lightest in weight compared to steel conduit materials, and usually lower in cost than other forms of conduit. In North American electrical practice, it is available in thirteen different size and wall thicknesses, with the thin-wall variety only suitable for embedded use in concrete, and heavier grades suitable for direct burial and exposed work. Most of the various fittings made for metal conduit are also available in PVC form. The plastic material resists moisture and many corrosive substances, but since the tubing is non-conductive an extra bonding (grounding) conductor must be pulled into each conduit. PVC conduit may be heated and bent in the field, by using special heating tools designed for the purpose. Joints to fittings are made with slip-on solvent-welded connections, which set up rapidly after assembly and attain full strength in about one day. Since slip-fit sections do not need to be rotated during assembly, the special union fittings used with threaded conduit (such as Ericson) are not required. Since PVC conduit has a higher coefficient of thermal expansion than other types, it must be mounted to allow for expansion and contraction of each run. 
Care should be taken when installing PVC underground in multiple or parallel run configurations due to mutual heating effect of densely packed cables, because the conduit will deform when heated. Reinforced thermosetting resin conduit (RTRC) or fiberglass conduit is light in weight compared to metallic conduits, which contributes to lower labor costs. It is sometimes referred to as FRE which stands for "fiberglass reinforced epoxy", however this term is a legally registered trademark of FRE Composites. It may also provide lower material cost. RTRC conduit can be used in a variety of indoor and outdoor applications. Fiberglass conduit is available in multiple wall thicknesses to suit various applications and has a support distance very similar to steel. High temperature, low smoke, no flame, classified area (Class I Division 2), and zero halogen versions are also manufactured for specialty applications such as subway tunnels and stations and in the US can meet National Fire Protection Association (NFPA) 130 requirements. Like other non-metallic conduits, a bonding conductor may be required for grounding. Joints are epoxy-glued, which requires some installation labor and time for joints to set. RTRC conduit may not be bent in the field and appropriate fittings must be used to change directions, nor is RTRC conduit approved to support luminaires. Rigid nonmetallic conduit (RNC) is a non-metallic unthreaded smooth-walled tubing. Electrical nonmetallic tubing (ENT) is a thin-walled corrugated tubing that is moisture-resistant and flame retardant. It is pliable such that it can be bent by hand, and is often flexible although the fittings are not. It is not threaded due to its corrugated shape, although some fittings might be. Flexible Flexible conduits are used to connect to motors or other devices where isolation from vibration is useful, or where an excessive number of fittings would be needed to use rigid connections. Electrical codes may restrict the length of a run of some types of flexible conduit. Flexible metallic conduit (FMC, informally called greenfield or flex) is made by the helical coiling of a self-interlocked ribbed strip of aluminum or steel, forming a hollow tube through which wires can be pulled. FMC is used primarily in dry areas where it would be impractical to install EMT or other non-flexible conduit, yet where metallic strength to protect conductors is still required. The flexible tubing does not maintain any permanent bend, and can flex freely. FMC may be used as an equipment grounding conductor if specific provisions are met regarding the trade size and length of FMC used, depending on the amperage of the circuits contained in the conduit. In general, an equipment grounding conductor must be pulled through the FMC with an ampacity suitable to carry the fault current likely imposed on the largest circuit contained within the FMC. Liquidtight flexible metal conduit (LFMC) is a metallic flexible conduit covered by a waterproof plastic coating. The interior is similar to FMC. Flexible metallic tubing (FMT; North America) is not the same as flexible metallic conduit (FMC) which is described in US National Electrical Code (NEC) Article 348. FMT is a raceway, but not a conduit and is described in a separate NEC Article 360. It only comes in 1/2" & 3/4" trade sizes, whereas FMC is sized 1/2" ~ 4" trade sizes. NEC 360.2 describes it as: "A raceway that is circular in cross section, flexible, metallic and liquidtight without a nonmetallic jacket." 
Liquidtight flexible nonmetallic conduit (LFNC) refers to several types of flame-resistant non-metallic tubing. Interior surfaces may be smooth or corrugated. There may be integral reinforcement within the conduit wall. It is also known as FNMC. Underground Conduit may be installed underground between buildings, structures, or devices to allow installation of power and communication cables. An assembly of these conduits, often called a duct bank, may either be directly buried in earth, or encased in concrete (sometimes with reinforcing rebar to aid against shear forces). Alternatively, a duct bank may be installed in a utility tunnel. A duct bank will allow replacement of damaged cables between buildings or additional power and communications circuits to be added, without the expense of re-excavation of a trench. While metal conduit is occasionally used for burial, usually PVC, polyethylene or polystyrene plastics are now used due to lower cost, easier installation, and better resistance to corrosion. Formerly, compressed asbestos fiber mixed with cement (such as transite) was used for some underground installations. Telephone and communications circuits were typically installed in fired-clay conduit. Cost comparison Exact ratios of installation labor, weight and material cost vary depending on the size of conduit, but the values for 3/4 inch (21 metric) trade size (North America) are representative. Fittings Despite the similarity to pipes used in plumbing, purpose-designed electrical fittings are used to connect conduit. Box connectors join conduit to a junction box or other electrical box. A typical box connector is inserted into a knockout in a junction box, with the threaded end then being secured with a ring (called a lock nut) from within the box, as a bolt would be secured by a nut. The other end of the fitting usually has a screw or compression ring which is tightened down onto the inserted conduit. Fittings for non-threaded conduits are either secured with set screws or with a compression nut that encircles the conduit. Fittings for general purpose use with metal conduits may be made of die-cast zinc, but where stronger fittings are needed, they are made of copper-free aluminum or cast iron. Couplings connect two pieces of conduit together. Sometimes the fittings are considered sufficiently conductive to bond (electrically unite) the metal conduit to a metal junction box (thus sharing the box's ground connection); other times, grounding bushings are used which have bonding jumpers from the bushing to a grounding screw on the box. Unlike water piping, if the conduit is to be watertight, the idea is to keep water out, not in. In this case, gaskets are used with special fittings, such as the weatherhead leading from the overhead electrical mains to the electric meter. Flexible metal conduit usually uses fittings with a clamp on the outside of the box, just like bare cables would. Conduit bodies A conduit body can be used to provide pulling access in a run of conduit, to allow more bends to be made in a particular section of conduit, to conserve space where a full size bend radius would be impractical or impossible, or to split a conduit path into multiple directions. Conductors may not be spliced inside a conduit body, unless it is specifically listed for such use. Conduit bodies differ from junction boxes in that they are not required to be individually supported, which can make them very useful in certain practical applications. 
Conduit bodies are commonly referred to as condulets, a term trademarked by Cooper Crouse-Hinds company, a division of Cooper Industries. Conduit bodies come in various types, moisture ratings, and materials, including galvanized steel, aluminum, and PVC. Depending on the material, they use different mechanical methods for securing conduit. Among the types are: L-shaped bodies ("Ells") include the LB, LL, and LR, where the inlet is in line with the access cover and the outlet is on the back, left and right, respectively. In addition to providing access to wires for pulling, "L" fittings allow a 90 degree turn in conduit where there is insufficient space for a full-radius 90 degree sweep (curved conduit section). T-shaped bodies ("Tees") feature an inlet in line with the access cover and outlets to both the cover's left and right. C-shaped bodies ("Cees") have identical openings above and below the access cover, and are used to pull conductors in a straight runs as they make no turn between inlet and outlet. "Service Ell" bodies (SLBs), shorter ells with inlets flush with the access cover, are frequently used where a circuit passes through an exterior wall from outside to inside. Other wireways Surface mounted raceway (wire molding) This type of "decorative" conduit is designed to provide an aesthetically acceptable passageway for wiring without hiding it inside or behind a wall. This is used where additional wiring is required, but where going through a wall would be difficult or require remodeling. The conduit has an open face with removable cover, secured to the surface, and wire is placed inside. Plastic raceway is often used for telecommunication wiring, such as network cables in an older structure, where it is not practical to drill through concrete block. Advantages Allows adding new wiring to an existing building without removing or cutting holes into the drywall, lath and plaster, concrete, or other wall finish. Allows circuits to be easily locatable and accessible for future changes, thus enabling minimum effort upgrades. Disadvantages Appearance may not be acceptable to all observers. Trunking The term trunking is used in the United Kingdom for electrical wireways, generally rectangular in cross section with removable lids. Mini trunking is a term used in the UK for small form-factor (usually 6 mm to 25 mm square or rectangle sectioned) PVC wireways. In India, this trunking is available with self-fixing tape to ease installation. In some countries including Iran, the term 'Trunking' is a channel that allows installation of switches and sockets. In North American practice, wire trough and lay-in wireways are terms used to designate similar products. Wall duct raceway is the term for the type that can be enclosed in a wall. Innerducts Innerducts are subducts that can be installed in existing underground conduit systems to provide clean, continuous, low-friction paths for placing optical cables, which have relatively low pulling tension limits. They provide a means for subdividing conventional conduit that was originally designed for single, large-diameter metallic conductor cables into multiple channels for smaller optical cables. Innerducts are typically small-diameter, semi-flexible subducts. According to Telcordia GR-356, there are three basic types of innerduct: smoothwall, corrugated, and ribbed. These various designs are based on the profile of the inside and outside diameters of the innerduct. 
The need for a specific characteristic or combination of characteristics, such as pulling strength, flexibility, or the lowest coefficient of friction, dictates the type of innerduct required. Beyond the basic profiles or contours (smoothwall, corrugated, or ribbed), innerduct is also available in an increasing variety of multiduct designs. Multiduct may be either a composite unit consisting of up to four or six individual innerducts that are held together by some mechanical means, or a single extruded product having multiple channels through which to pull several cables. In either case, the multiduct is coilable, and can be pulled into existing conduit in a manner similar to that of conventional innerduct. Passive fire protection Conduit is of relevance to both firestopping, where they become penetrants, and fireproofing, where circuit integrity measures can be applied on the outside to keep the internal cables operational during an accidental fire. The British standard BS 476 also considers internal fires, whereby the fireproofing must protect the surroundings from cable fires. Any external treatments must consider the effect upon ampacity derating due to internal heat buildup. See also Panzergewinde Pipe thread References Bibliography External links Conduit definition How to Bend Conduit Using a Pipe Bender Cables Electrical wiring
Electrical conduit
[ "Physics", "Engineering" ]
3,989
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
43,658,031
https://en.wikipedia.org/wiki/Athanogene
The term Athanogene, derived from the Greek for "against death" (athánatos), was incorporated into the name of the gene Bcl-2-associated athanogene 1 (BAG-1; alias HAP46/BAG-1M) upon the discovery of its ability to confer resistance to apoptosis on transfected cells. References Genes Apoptosis Cloning
Athanogene
[ "Chemistry", "Engineering", "Biology" ]
84
[ "Cloning", "Genetic engineering", "Apoptosis", "Signal transduction" ]
45,502,636
https://en.wikipedia.org/wiki/Sliding%20filament%20theory
The sliding filament theory explains the mechanism of muscle contraction based on muscle proteins that slide past each other to generate movement. According to the sliding filament theory, the myosin (thick filaments) of muscle fibers slide past the actin (thin filaments) during muscle contraction, while the two groups of filaments remain at relatively constant length. The theory was independently introduced in 1954 by two research teams, one consisting of Andrew Huxley and Rolf Niedergerke from the University of Cambridge, and the other consisting of Hugh Huxley and Jean Hanson from the Massachusetts Institute of Technology. It was originally conceived by Hugh Huxley in 1953. Andrew Huxley and Niedergerke introduced it as a "very attractive" hypothesis. Before the 1950s there were several competing theories on muscle contraction, including electrical attraction, protein folding, and protein modification. The novel theory directly introduced a new concept called cross-bridge theory (classically swinging cross-bridge, now mostly referred to as cross-bridge cycle) which explains the molecular mechanism of sliding filament. Cross-bridge theory states that actin and myosin form a protein complex (classically called actomyosin) by attachment of myosin head on the actin filament, thereby forming a sort of cross-bridge between the two filaments. The sliding filament theory is a widely accepted explanation of the mechanism that underlies muscle contraction. History Early works The first muscle protein discovered was myosin by a German scientist Willy Kühne, who extracted and named it in 1864. In 1939 a Russian husband and wife team Vladimir Alexandrovich Engelhardt and Militsa Nikolaevna Lyubimova discovered that myosin had an enzymatic (called ATPase) property that can break down ATP to release energy. Albert Szent-Györgyi, a Hungarian physiologist, turned his focus on muscle physiology after winning the Nobel Prize in Physiology or Medicine in 1937 for his works on vitamin C and fumaric acid. He demonstrated in 1942 that ATP was the source of energy for muscle contraction. He actually observed that muscle fibres containing myosin B shortened in the presence of ATP, but not with myosin A, the experience which he later described as "perhaps the most thrilling moment of my life." With Brunó Ferenc Straub, he soon found that myosin B was associated with another protein, which they called actin, while myosin A was not. Straub purified actin in 1942, and Szent-Györgyi purified myosin A in 1943. It became apparent that myosin B was a combination of myosin A and actin, so that myosin A retained the original name, whereas they renamed myosin B as actomyosin. By the end of the 1940s Szent-Györgyi's team had postulated with evidence that contraction of actomyosin was equivalent to muscle contraction as a whole. But the notion was generally opposed, even from the likes of Nobel laureates such as Otto Fritz Meyerhof and Archibald Hill, who adhered to the prevailing dogma that myosin was a structural protein and not a functional enzyme. However, in one of his last contributions to muscle research, Szent-Györgyi demonstrated that actomyosin driven by ATP was the basic principle of muscle contraction. Origin By the time Hugh Huxley earned his PhD from the University of Cambridge in 1952 on his research on the structure of muscle, Szent-Györgyi had turned his career into cancer research. Huxley went to Francis O. 
Schmitt's laboratory at the Massachusetts Institute of Technology with a post-doctoral fellowship in September 1952, where he was joined by another English post-doctoral fellow Jean Hanson in January 1953. Hanson had a PhD in muscle structure from King's College, London in 1951. Huxley had used X-ray diffraction to speculate that muscle proteins, particularly myosin, form structured filaments giving rise to sarcomere (a segment of muscle fibre). Their main aim was to use electron microscopy to study the details of those filaments as never done before. They soon discovered and confirmed the filament nature of muscle proteins. Myosin and actin form overlapping filaments, myosin filaments mainly constituting the A band (the dark region of a sarcomere), while actin filaments traverse both the A and I (light region) bands. Huxley was the first to suggest the sliding filament theory in 1953, stating: Later, in 1996, Huxley regretted that he should have included Hanson in the formulation of his theory because it was based on their collaborative work. Andrew Huxley, whom Alan Hodgkin described as "wizard with scientific apparatus", had just discovered the mechanism of the nerve impulse (action potential) transmission (for which he and Hodgkin later won the Nobel Prize in Physiology or Medicine in 1963) in 1949 using his own design of voltage clamp, and was looking for an associate who could properly dissect out muscle fibres. Upon recommendation of a close friend Robert Stämpfli, a German physician Rolf Niedergerke joined him at the University of Cambridge in 1952. By then he realised that the conventionally used phase-contrast microscope was not suitable for fine structures of muscle fibres, and thus developed his own interference microscope. Between March 1953 and January 1954 they executed their research. Huxley recollected that at the time the only person who ever thought of sliding filaments before 1953 was Dorothy Hodgkin (later winner of the 1964 Nobel Prize in Chemistry). He spent the summer of 1953 at Marine Biological Laboratory at Woods Hole, Massachusetts, to use electron microscope there. There he met Hugh Huxley and Hanson with whom he shared data and information on their works. They parted with an agreement that they would keep in touch, and when their aim is achieved, they would publish together, if they ever "reached similar conclusions". The sliding filament theory The sliding filament theory was born from two consecutive papers published on the 22 May 1954 issue of Nature under the common theme "Structural Changes in Muscle During Contraction". Though their conclusions were fundamentally similar, their underlying experimental data and propositions were different. Huxley-Niedergerke hypothesis The first paper, written by Andrew Huxley and Rolf Niedergerke, is titled "Interference microscopy of living muscle fibres". It was based on their study of frog muscle using interference microscope, which Andrew Huxley developed for the purpose. According to them: the I bands are composed of actin filaments, and the A bands principally of myosin filaments; and during contraction, the actin filaments move into the A bands between the myosin filaments. Huxley-Hanson hypothesis The second paper, by Hugh Huxley and Jean Hanson, is titled "Changes in the cross-striations of muscle during contraction and stretch and their structural interpretation". It is more elaborate and was based on their study of rabbit muscle using phase contrast and electron microscopes. 
According to them: the backbone of a muscle fibre is actin filaments which extend from the Z line up to one end of the H zone, where they are attached to an elastic component which they named "S filament"; myosin filaments extend from one end of the A band through the H zone up to the other end of the A band; myosin filaments remain in relatively constant length during muscle stretch or contraction; if myosin filaments contract beyond the length of the A band, their ends fold up to form contraction bands; myosin and actin filaments lie side by side in the A band and in the absence of ATP they do not form cross-linkages; during stretching, only the I bands and H zone increase in length, while A bands remain the same; during contraction, actin filaments move into the A bands and the H zone is filled up reducing its stretch, the I bands shorten, the Z line comes in contact with the A bands; and the possible driving force of contraction is the actin-myosin linkages which depend on ATP hydrolysis by the myosin. Reception and consequences In spite of strong evidence, the sliding filament theory did not gain any support for several years to come. Szent-Györgyi himself refused to believe that myosin filaments were confined to the thick filament (A band). F.O. Schmitt, whose electron microscope provided the best data, also remained sceptical of the original images. There were also immediate arguments as to the organisation of the filaments, whether the two sets (myosin and actin) of filaments were merely overlapping or continuous. It was only with the new electron microscope that Hugh Huxley confirmed the overlapping nature of the filaments in 1957. It was also from this publication that the existence of actin-myosin linkage (now called cross-bridge) was clearly shown. But he took another five years to provide evidence that the cross-bridge was a dynamic interaction between actin and myosin filaments. He obtained the actual molecular arrangement of the filaments using X-ray crystallography by teaming up with Kenneth Holmes, who was trained by Rosalind Franklin, in 1965. It was only after a conference in 1972 at Cold Spring Harbor Laboratory, where the theory and its evidence were deliberated, that it became generally accepted. At the conference, as Koscak Maruyama later recalled, Hanson had to answer the criticisms by shouting, "I know I cannot explain the mechanism yet, but the sliding is a fact." The factual proofs came in the early 1980s when it could be demonstrated the actual sliding motion using novel sophisticated tools by different researchers. Cross-bridge mechanism With substantial evidence, Hugh Huxley formally proposed the mechanism for sliding filament which is variously called swinging cross-bridge model, cross-bridge theory or cross-bridge model. (He himself preferred the name "swinging crossbridge model", because, as he recalled, "it [the discovery] was, after all, the 1960s".) He published his theory in the 20 June 1969 issue of Science under the title "The Mechanism of Muscular Contraction". According to his theory, filament sliding occurs by cyclic attachment and detachment of myosin on actin filaments. Contraction occurs when the myosin pulls the actin filament towards the centre of the A band, detaches from actin and creates a force (stroke) to bind to the next actin molecule. This idea was subsequently proven in detail, and is more appropriately known as the cross-bridge cycle. References Muscular system Physiology Cell movement
Sliding filament theory
[ "Biology" ]
2,265
[ "Physiology" ]
45,507,060
https://en.wikipedia.org/wiki/Multiscale%20turbulence
Multiscale turbulence is a class of turbulent flows in which the chaotic motion of the fluid is forced at different length and/or time scales. This is usually achieved by immersing in a moving fluid a body with a multiscale, often fractal-like, arrangement of length scales. This arrangement of scales can be either passive or active. As turbulent flows contain eddies with a wide range of scales, exciting the turbulence at particular scales (or over a range of scales) allows one to fine-tune the properties of that flow. Multiscale turbulent flows have been successfully applied in different fields, such as: Reducing acoustic noise from wings by modifying the geometry of spoilers; Enhancing heat transfer from impinging jets passing through grids; Reducing the vortex shedding intensity of flows past normal plates without changing the shedding frequency; Enhancing mixing by energy-efficient stirring; Improving flow metering and flow conditioning in pipes; Improving combustion. Multiscale turbulence has also played an important role in probing the internal structure of turbulence. This sort of turbulence allowed researchers to unveil a novel dissipation law in which the parameter $C_\epsilon$ in $\epsilon = C_\epsilon \, u'^3 / L$ is not constant, as required by the Richardson–Kolmogorov energy cascade. This new law can be expressed as $C_\epsilon \propto \mathrm{Re}_I^{m} / \mathrm{Re}_L^{n}$, with $m \approx 1 \approx n$, where $\mathrm{Re}_I$ and $\mathrm{Re}_L$ are Reynolds numbers based, respectively, on initial/global conditions (such as free-stream velocity and the object's length scale) and local conditions (such as the rms velocity and integral length scale). This new dissipation law characterises non-equilibrium turbulence apparently universally in various flows (not just multiscale turbulence) and results from a non-equilibrium unsteady energy cascade. This imbalance implies that new mean flow scalings exist for free shear turbulent flows, as already observed in axisymmetric wakes. References Chaos theory Turbulence Fluid dynamics
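A minimal numerical sketch may help contrast the two dissipation scalings. The proportionality constants, exponents, and flow parameters below are illustrative assumptions, not measured values.

```python
def dissipation_classical(u_rms, L, C_eps=0.5):
    """Richardson-Kolmogorov estimate with C_eps treated as a constant (assumed value)."""
    return C_eps * u_rms**3 / L

def dissipation_nonequilibrium(u_rms, L, nu, U_inf, L_global, C=0.5, m=1.0, n=1.0):
    """Non-equilibrium estimate with C_eps proportional to Re_I**m / Re_L**n."""
    Re_I = U_inf * L_global / nu   # global Reynolds number (free-stream velocity, object scale)
    Re_L = u_rms * L / nu          # local Reynolds number (rms velocity, integral scale)
    C_eps = C * Re_I**m / Re_L**n
    return C_eps * u_rms**3 / L

# Decaying grid-type turbulence, illustrative numbers only
nu, U_inf, L_global = 1.5e-5, 10.0, 0.1
for u_rms, L in [(1.0, 0.01), (0.5, 0.02), (0.25, 0.04)]:
    print(f"u'={u_rms:5.2f} m/s  L={L:5.3f} m  "
          f"classical={dissipation_classical(u_rms, L):9.2f}  "
          f"non-equilibrium={dissipation_nonequilibrium(u_rms, L, nu, U_inf, L_global):9.2f}")
```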
Multiscale turbulence
[ "Chemistry", "Engineering" ]
371
[ "Piping", "Chemical engineering", "Turbulence", "Fluid dynamics" ]
33,989,641
https://en.wikipedia.org/wiki/Chaotic%20scattering
Chaotic scattering is a branch of chaos theory dealing with scattering systems displaying a strong sensitivity to initial conditions. In a classical scattering system there will be one or more impact parameters, b, in which a particle is sent into the scatterer. This gives rise to one or more exit parameters, y, as the particle exits towards infinity. While the particle is traversing the system, there may also be a delay time, T—the time it takes for the particle to exit the system—in addition to the distance travelled, s. In certain systems (e.g. "billiard-like" systems in which the particle undergoes lossless collisions with hard, fixed objects) the two will be equivalent—see below. In a chaotic scattering system, a minute change in the impact parameter, may give rise to a very large change in the exit parameters. Gaspard–Rice system An excellent example system is the "Gaspard–Rice" (GR) scattering system —also known simply as the "three-disc" system—which embodies many of the important concepts in chaotic scattering while being simple and easy to understand and simulate. The concept is very simple: we have three hard discs arranged in some triangular formation, a point particle is sent in and undergoes perfect, elastic collisions until it exits towards infinity. In this discussion, we will only consider GR systems having equally sized discs, equally spaced around the points of an equilateral triangle. Figure 1 illustrates this system while Figure 2 shows two example trajectories. Note first that the trajectories bounce around the system for some time before finally exiting. Note also, that if we consider the impact parameters to be the start of the two perfectly horizontal lines at left (the system is completely reversible: the exit point could also be the entry point), the two trajectories are initially so close as to be almost identical. By the time they exit, they are completely different, thus illustrating the strong sensitivity to initial conditions. This system will be used as an example throughout the article. Decay rate If we introduce a large number of particles with uniformly distributed impact parameters, the rate at which they exit the system is known as the decay rate. We can calculate the decay rate by simulating the system over many trials and forming a histogram of the delay time, T. For the GR system, it is easy to see that the delay time and the length of the particle trajectory are equivalent but for a multiplication coefficient. A typical choice for the impact parameter is the y-coordinate, while the trajectory angle is kept constant at zero degrees—horizontal. Meanwhile, we say that the particle has "exited the system" once it passes a border some arbitrary, but sufficiently large, distance from the centre of the system. We expect the number of particles remaining in the system, N(T), to vary as: Thus the decay rate, , is given as: where n is the total number of particles. Figure 3 shows a plot of the path-length versus the number of particles for a simulation of one million (1e6) particles started with random impact parameter, b. A fitted straight line of negative slope, is overlaid. The path-length, s, is equivalent to the decay time, T, provided we scale the (constant) speed appropriately. Note that an exponential decay rate is a property specifically of hyperbolic chaotic scattering. Non-hyperbolic scatterers may have an arithmetic decay rate. 
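As a rough illustration of how such a decay rate can be measured, the sketch below traces point particles through a three-disc (Gaspard–Rice) billiard and fits an exponential to the survival curve of path lengths inside the scatterer. The geometry (unit-radius discs spaced 2.5 apart), the launch line, and all numerical parameters are assumptions chosen for illustration, not values from the original studies.

```python
import numpy as np

R = 1.0          # disc radius (assumed)
SPACING = 2.5    # centre-to-centre distance (assumed), leaving gaps of 0.5
angles = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
centres = [SPACING / np.sqrt(3.0) * np.array([np.cos(a), np.sin(a)]) for a in angles]

def trace(pos, vel, max_bounce=200):
    """Path length travelled between the first and last disc collision
    (np.nan if the particle never hits a disc); trapped orbits are cut at max_bounce."""
    length, hit_any = 0.0, False
    for _ in range(max_bounce):
        t_hit, c_hit = np.inf, None
        for c in centres:
            d = pos - c
            b = np.dot(d, vel)
            disc = b * b - (np.dot(d, d) - R * R)
            if disc > 0.0:
                t = -b - np.sqrt(disc)            # nearest intersection along the ray
                if 1e-9 < t < t_hit:
                    t_hit, c_hit = t, c
        if c_hit is None:                          # nothing ahead: the particle escapes
            return length if hit_any else np.nan
        if hit_any:
            length += t_hit                        # accumulate path inside the scatterer
        pos = pos + t_hit * vel
        n = (pos - c_hit) / R                      # outward normal at the impact point
        vel = vel - 2.0 * np.dot(vel, n) * n       # specular (elastic) reflection
        hit_any = True
    return length

rng = np.random.default_rng(0)
impact = rng.uniform(-1.0, 1.0, 20000)             # impact parameter b (y-coordinate)
lengths = np.array([trace(np.array([-4.0, b]), np.array([1.0, 0.0])) for b in impact])
lengths = lengths[~np.isnan(lengths)]

# Survival curve N(s) and an exponential fit for the decay rate per unit path length
s_grid = np.linspace(1.0, 10.0, 25)
N_s = np.array([(lengths > s).sum() for s in s_grid])
mask = N_s > 20
kappa = -np.polyfit(s_grid[mask], np.log(N_s[mask]), 1)[0]
print("estimated decay rate (per unit path length):", kappa)
```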
An experimental system and the stable manifold Figure 4 shows an experimental realization of the Gaspard–Rice system using a laser instead of a point particle. As anyone who's actually tried this knows, this is not a very effective method of testing the system—the laser beam gets scattered in every direction. As shown by Sweet, Ott and Yorke, a more effective method is to direct coloured light through the gaps between the discs (or in this case, tape coloured strips of paper across pairs of cylinders) and view the reflections through an open gap. The result is a complex pattern of stripes of alternating colour, as shown below, seen more clearly in the simulated version below that. Figures 5 and 6 show the basins of attraction for each impact parameter, b, that is, for a given value of b, through which gap does the particle exit? The basin boundaries form a Cantor set and represent members of the stable manifold: trajectories that, once started, never exit the system. The invariant set and the symbolic dynamics So long as it is symmetric, we can easily think of the system as an iterated function map, a common method of representing a chaotic, dynamical system. Figure 7 shows one possible representation of the variables, with the first variable, , representing the angle around the disc at rebound and the second, , representing the impact/rebound angle relative to the disc. A subset of these two variables, called the invariant set will map onto themselves. This set, four members of which are shown in Figures 8 and 9, will be fractal, totally non-attracting and of measure zero. This is an interesting inversion of the more normally discussed chaotic systems in which the fractal invariant set is attracting and in fact comprises the basin[s] of attraction. Note that the totally non-attracting nature of the invariant set is another property of a hyperbolic chaotic scatterer. Each member of the invariant set can be modelled using symbolic dynamics: the trajectory is labelled based on each of the discs off of which it rebounds. The set of all such sequences form an uncountable set. For the four members shown in Figures 8 and 9, the symbolic dynamics will be as follows: ...121212121212... ...232323232323... ...313131313131... ...123123123123... Members of the stable manifold may be likewise represented, except each sequence will have a starting point. When you consider that a member of the invariant set must "fit" in the boundaries between two basins of attraction, it is apparent that, if perturbed, the trajectory may exit anywhere along the sequence. Thus it should also be apparent that an infinite number of alternating basins of all three "colours" will exist between any given boundary. Because of their unstable nature, it is difficult to access members of the invariant set or the stable manifold directly. The uncertainty exponent is ideally tailored to measure the fractal dimension of this type of system. Once again using the single impact parameter, b, we perform multiple trials with random impact parameters, perturbing them by a minute amount, , and counting how frequently the number of rebounds off the discs changes, that is, the uncertainty fraction. Note that even though the system is two dimensional, a single impact parameter is sufficient to measure the fractal dimension of the stable manifold. This is demonstrated in Figure 10, which shows the basins of attraction plotted as a function of a dual impact parameter, and . 
The stable manifold, which can be seen in the boundaries between the basins, is fractal along only one dimension. Figure 11 plots the uncertainty fraction, f, as a function of the uncertainty, for a simulated Gaspard–Rice system. The slope of the fitted curve returns the uncertainty exponent, , thus the box-counting dimension of the stable manifold is, . The invariant set is the intersection of the stable and unstable manifolds. Since the system is the same whether run forwards or backwards, the unstable manifold is simply the mirror image of the stable manifold and their fractal dimensions will be equal. On this basis we can calculate the fractal dimension of the invariant set: where D_s and D_u are the fractal dimensions of the stable and unstable manifolds, respectively and N=2 is the dimensionality of the system. The fractal dimension of the invariant set is D=1.24. Relationship between the fractal dimension, decay rate and Lyapunov exponents From the preceding discussion, it should be apparent that the decay rate, the fractal dimension and the Lyapunov exponents are all related. The large Lyapunov exponent, for instance, tells us how fast a trajectory in the invariant set will diverge if perturbed. Similarly, the fractal dimension will give us information about the density of orbits in the invariant set. Thus we can see that both will affect the decay rate as captured in the following conjecture for a two-dimensional scattering system: where D1 is the information dimension and h1 and h2 are the small and large Lyapunov exponents, respectively. For an attractor, and it reduces to the Kaplan–Yorke conjecture. See also Lakes of Wada Uncertainty exponent References External links Software for simulating the Gaspard–Rice system A simulation of the Gaspard-Rice system's and its symbolic dynamics, from Chaos V: Duhem's Bull Chaos theory Scattering
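Reusing the geometry constants (`R`, `centres`) from the sketch in the decay-rate section, the uncertainty exponent can be estimated along the lines described here. The perturbation sizes are arbitrary, and the conversion from the exponent to the stable-manifold dimension (D_s = 2 − α for a one-dimensional slice of a two-dimensional system) is a standard assumption rather than something stated in this article; only D = D_s + D_u − N comes from the text.

```python
import numpy as np

def num_bounces(b, max_bounce=200):
    """Count disc collisions for impact parameter b (assumes R and centres
    defined in the earlier Gaspard-Rice sketch)."""
    pos, vel, count = np.array([-4.0, b]), np.array([1.0, 0.0]), 0
    for _ in range(max_bounce):
        t_hit, c_hit = np.inf, None
        for c in centres:
            d = pos - c
            s = np.dot(d, vel)
            disc = s * s - (np.dot(d, d) - R * R)
            if disc > 0.0:
                t = -s - np.sqrt(disc)
                if 1e-9 < t < t_hit:
                    t_hit, c_hit = t, c
        if c_hit is None:
            return count
        pos = pos + t_hit * vel
        n = (pos - c_hit) / R
        vel = vel - 2.0 * np.dot(vel, n) * n
        count += 1
    return count

def uncertainty_fraction(eps, n_trials=4000, rng=np.random.default_rng(1)):
    """Fraction of impact parameters whose bounce count changes under a perturbation eps."""
    b = rng.uniform(-1.0, 1.0, n_trials)
    return np.mean([num_bounces(bi) != num_bounces(bi + eps) for bi in b])

eps_values = np.array([1e-2, 3e-3, 1e-3, 3e-4, 1e-4])
f = np.array([uncertainty_fraction(e) for e in eps_values])
alpha = np.polyfit(np.log(eps_values), np.log(f), 1)[0]   # uncertainty exponent
D_s = 2.0 - alpha        # stable-manifold dimension (assumed relation D_s = 1 + (1 - alpha))
D = 2.0 * D_s - 2.0      # invariant-set dimension from D = D_s + D_u - N, with D_u = D_s, N = 2
print(alpha, D_s, D)
```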
Chaotic scattering
[ "Physics", "Chemistry", "Materials_science" ]
1,842
[ "Nuclear physics", "Scattering", "Condensed matter physics", "Particle physics" ]
33,989,922
https://en.wikipedia.org/wiki/Biomechanics%20of%20sprint%20running
Sprinting involves a quick acceleration phase followed by a velocity maintenance phase. During the initial stage of sprinting, the runners have their upper body tilted forward in order to direct ground reaction forces more horizontally. As they reach their maximum velocity, the torso straightens out into an upright position. The goal of sprinting is to reach and maintain high top speeds to cover a set distance in the shortest possible time. A lot of research has been invested in quantifying the biological factors and mathematics that govern sprinting. In order to achieve these high velocities, it has been found that sprinters have to apply a large amount of force onto the ground to achieve the desired acceleration, rather than taking more rapid steps. Quantifying sprinting mechanics and governing equations Human legs during walking have been mechanically simplified in previous studies to a set of inverted pendulums, while distance running (characterized as a bouncing gait) has modeled the legs as springs. Until recently, it had been long believed that faster sprinting speeds are promoted solely by physiological features that increase stride length and frequency; while these factors do contribute to sprinting velocities, it has also been found that the runner's ability to produce ground forces is also very important. Weyand et al. (2000) came up with the following equation for determining sprint velocity: where is the sprint velocity (m/s), the step frequency (1/s), the average force applied to the ground (N), the body weight (N), and the contact length (m). In short, sprint velocity is reliant on three main factors: step frequency (how many steps you can take per second), average vertical force applied to the ground, and contact length (distance your center of mass translates over the course of one contact period). The formula was tested by having subjects run on a force treadmill (which is a treadmill that contains a force plate to measure ground reaction forces (GRF)). Figure 1 shows approximately what the force plate readout looks like for the duration of three steps. While this equation has proved to be fairly accurate, the study was limited in the sense that data was collected by a force plate that only measured vertical GRF rather than horizontal GRF. This led some people to the false pretense that simply exerting a greater vertical (perpendicular) force to the ground would lead to greater acceleration, which is far from correct (See Morin studies below). In 2005, Hunter et al. conducted a study that determined relationships between sprint velocity and relative impulses in which gait and ground reaction force data was collected and analyzed. It was found that during accelerated runs, a typical support phase is characterized by a breaking phase followed by a propulsive phase (-FH followed by + FH). A common trend in the fastest subjects tested was that there was only a moderate to low amount of vertical force and a large amount of horizontal forces produced. Post study, it was hypothesized by the author that braking forces are necessary to store elastic energy in muscle and tendon tissue. This study loosely confirmed the importance of horizontal as well as vertical GRF during the acceleration phase of sprinting. Unfortunately, since data were collected at the 16-m mark, it was insufficient to draw definite conclusions regarding the entire acceleration phase. Morin et al. 
(2011) performed a study to investigate the importance of ground reaction forces by having sprinters run on a force treadmill that measured both horizontal and vertical ground reaction forces. Belt velocity was measured for each step and calculations were performed to find the “index of force application technique”, which determines how well subjects are able to apply force in the horizontal direction. The second half of the test involved subjects performing a 100-m sprint on a man-made track using radar to measure the forward speed of runners to create velocity-time curves. The main result of this study showed that the force application technique (rather than simply the total amount of force applied) is the key determinant factor in predicting a sprinter's velocity. This has yet to be integrated into the governing equation of sprinting. Kinetics The kinetics of running describes the motion of a runner using the effects of forces acting on or out of the body. The majority of contributing factors to internal forces comes from leg muscle activation and arm swing. Leg Muscle Activation The muscles responsible for accelerating the runner forward are required to contract with increasing speed to accommodate the increasing velocity of the body. During the acceleration phase of sprinting, the contractile component of muscles is the main component responsible for the power output. Once a steady state velocity has been reached and the sprinter is upright, a sizable fraction of the power comes from the mechanical energy stored in the ‘series elastic elements’ during stretching of the contractile muscles that is released immediately after the positive work phase. As the velocity of the runner increases, inertia and air resistance effects become the limiting factors on the sprinter's top speed. It was previously believed that there was an intramuscular viscous force that increased proportionally to the velocity of muscle contraction that opposed the contractile force; this theory has since been disproved. In a study conducted in year 2004, the gait patterns of distance runners, sprinters, and non-runners was measured using video recording. Each group ran a 60-meter run at 5.81 m/s (to represent distance running) and at maximal running speed. The study showed that non-sprinters ran with an inefficient gait for the maximal speed trial while all groups ran with energetically efficient gaits for the distance trial. This indicates that the development of an economical distance running form is a natural process while sprinting is a learned technique that requires practice. Arm Swing Contrary to the findings of Mann et al. (1981), arm swing plays a vital role in both stabilizing the torso and vertical propulsion. Regarding torso stabilization, arm swing serves to counterbalance the rotational momentum created by leg swing, as suggested by Hinrichs et al. (1987). In short, the athlete would have a hard time controlling the rotation of their trunk without arm swing. The same study also suggested that, as opposed to popular belief, the horizontal force production capabilities of the arms are limited due to the backward swing that follows the forward swing, so the two components cancel each other out. This is not to suggest, however, that arm swing does not contribute to propulsion at all during sprinting; in fact, it can contribute up to 10% of the total vertical propulsive forces that a sprinter can apply to the ground. 
The reason for this is that, unlike the forward-backward motion, both arms are synchronized in their upward-downward movement. As a result, there is no cancellation of forces. Efficient sprinters have an arm swing that originates from the shoulder and has a flexion and extension action that is of the same magnitude of the flexion and extension occurring at the ipsilateral shoulder and hip. Energetics Di Prampero et al. mathematically quantifies the cost of the acceleration phase (first 30 m) sprint running through experimental testing. The subjects sprinted repeatedly on a track while radar determined their velocity. Additionally, it has been found in previous literature that the energetics of sprinting on flat terrain is analogous to uphill running at a constant speed. The mathematical derivation process is loosely followed below: In the initial phase of sprint running, the total acceleration acting on the body () is the vectoral sum of the forward acceleration and earth's acceleration due to gravity: The “Equivalent slope” (ES) when sprinting on flat ground is: The “Equivalent normalized body mass” (EM) is then found to be: Following the data collection, the cost of sprinting () was found to be: The above equation does not take wind resistance into account, so considering the cost of running against wind resistance (), which is known to be: We combine the two equations to arrive at: Where is the acceleration of the runner's body, the forward acceleration, the acceleration of gravity, a proportionality constant and the velocity. Fatigue effects Fatigue is a prominent factor in sprinting, and it is already widely known that it hinders maximal power output in muscles, but it also affects the acceleration of runners in the ways listed below. Submaximal muscle coordination A study on muscle coordination in which subjects performed repeated 6-second cycling sprints, or intermittent sprints of short duration (ISSD) showed a correlation between decrease in maximal power output and changes in motor coordination. In this case, motor coordination refers to the ability to coordinate muscle movements in order to optimize a physical action, so submaximal coordination indicates that the muscles are no longer activating in sync with one another. The results of the study showed that a delay between the vastus lateralis (VL) and biceps femoris (BF) muscles. Since there was a decrease in power during ISSD occurring in tandem with changes in VL-BF coordination, it is indicated that changes in inter-muscle coordination is one of the contributing factors for the reduced power output resulting from fatigue. This was done using bicycle sprinting, but the principles carry over to sprinting from a runner's perspective. Hindrance of effective force application techniques Morin et al. explored the effects of fatigue on force production and force application techniques in a study where sprinters performed four sets of five 6 second sprints using the same treadmill setup as previously mentioned. Data was collected on their ability to produce ground reaction forces as well as their ability to coordinate the ratio of ground forces (horizontal to vertical) to allow for greater horizontal acceleration. The immediate results showed a significant decrease in performance with each sprint and a sharper decrease in rate of performance depreciation with each subsequent data set. In conclusion, it was obvious that both the total force production capability and technical ability to apply ground forces were greatly affected. 
Injury Prevention Running gait (biomechanics) is very important for not only efficiency but also for injury prevention. Approximately between 25 and 65% of all runners experience running related injuries each year. Abnormal running mechanics are often cited as the cause of injuries. However, few suggest altering a person's running pattern in order to reduce the risk of injury. Wearable technology companies like I Measure U are creating solutions using biomechanics data to analyse the gait of a runner in real time and provide feedback on how to change the running technique to reduce injury risk. References Biomechanics Sprint (running)
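As a rough numerical companion to the Weyand et al. (2000) relation quoted earlier (sprint velocity expressed through step frequency, average ground force relative to body weight, and contact length), here is a small sketch. The product form is reconstructed from how the variables are defined in the text and should be checked against the original paper; the input values are illustrative, not measured data.

```python
def sprint_velocity(step_freq_hz, avg_force_n, body_weight_n, contact_length_m):
    """Weyand-style estimate: v = f * (F_avg / W_b) * L_c, in m/s."""
    return step_freq_hz * (avg_force_n / body_weight_n) * contact_length_m

# Illustrative values only (assumed): ~4.5 steps/s, average ground force about twice
# body weight, and ~1 m of centre-of-mass travel per ground contact.
print(sprint_velocity(step_freq_hz=4.5, avg_force_n=1600.0,
                      body_weight_n=800.0, contact_length_m=1.0))   # ~9 m/s
```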
Biomechanics of sprint running
[ "Physics" ]
2,162
[ "Biomechanics", "Mechanics" ]
33,990,395
https://en.wikipedia.org/wiki/DDoS%20attacks%20during%20the%20October%202011%20South%20Korean%20by-election
The DDoS attacks during the October 2011 South Korean by-election were allegedly two separate distributed denial-of-service (DDoS) attacks that occurred on October 26, 2011. The attacks, which took place during the October 2011 Seoul mayoral by-election, targeted the websites of the National Election Commission (NEC) and then-mayoral candidate Park Won-soon. Investigators assert that the attacks were carried out in hopes of suppressing young voters, to the benefit of the Grand National Party. An aide of Grand National Party legislator Choi Gu-sik was found responsible for the attacks. The attacks The attacks consisted of two separate denial-of-service attacks against independent National Election Commission and mayoral candidate Park Won-soon, carried out with the help of a botnet of 200 infected computers. The attacks were conducted during the morning, when citizens--particularly young voters looking to vote before work--would have been expected to look up polling station locations. It has been theorized that the attacks were conducted in the belief that they may reduce voter turnout, to the benefit of the Grand National Party's candidate Na Kyung-won. Police stated that the attack against the NEC lasted about two hours, specifically impacting the part of the website with information on polling locations; Park Won-soon's website was attacked twice that day. The National Police Agency later revealed that an aide to Grand National Part lawmaker Choi Gu-sik, referred to in the media by only their surname "Gong," was responsible for the two attacks. The National Police Agency later arrested Gong and four other associates. Some researchers, however, have questioned the official narrative. Doubts have been raised as to whether Gong had the technical expertise or resources to pull off the attack. Others have pointed out that under a DDoS attack, it would be unusual for parts of a website to be offline while others are online, suggesting perhaps a technical failure instead. These events caused a collective panic amongst GNP members as they often denounce the online activities of South Korean progressives. Political impact The exposure of his role in the attacks led to Choi Gu-sik officially resigning his position as a lawmaker, along with several other members of the GNP. In the wake of the scandal, reformists in the conservative Grand National Party put pressure on core members of the party who were closely affiliated with the Lee Myung-bak government; this in turn led to Park Geun-hye being brought back into the spotlight to reorganize the GNP. Social impact More than 30 university student associations made a joint statement calling for a thorough investigation of the attacks. See also 2008 Grand National Party Convention Bribery Incident Lee Myung-bak government References 2011 in South Korea Presidency of Lee Myung-bak Denial-of-service attacks Liberty Korea Party Cyberwarfare
DDoS attacks during the October 2011 South Korean by-election
[ "Technology" ]
576
[ "Denial-of-service attacks", "Computer security exploits" ]
33,993,737
https://en.wikipedia.org/wiki/Decellularization
Decellularization (also spelled decellularisation in British English) is the process used in biomedical engineering to isolate the extracellular matrix (ECM) of a tissue from its inhabiting cells, leaving an ECM scaffold of the original tissue, which can be used in artificial organ and tissue regeneration. Organ and tissue transplantation treat a variety of medical problems, ranging from end organ failure to cosmetic surgery. One of the greatest limitations to organ transplantation derives from organ rejection caused by antibodies of the transplant recipient reacting to donor antigens on cell surfaces within the donor organ. Because of unfavorable immune responses, transplant patients suffer a lifetime taking immunosuppressing medication. Stephen F. Badylak pioneered the process of decellularization at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh. This process creates a natural biomaterial to act as a scaffold for cell growth, differentiation and tissue development. By recellularizing an ECM scaffold with a patient’s own cells, the adverse immune response is eliminated. Nowadays, commercially available ECM scaffolds are available for a wide variety of tissue engineering. Using peracetic acid to decellularize ECM scaffolds have been found to be false and only disinfects the tissue. With a wide variety of decellularization-inducing treatments available, combinations of physical, chemical, and enzymatic treatments are carefully monitored to ensure that the ECM scaffold maintains the structural and chemical integrity of the original tissue. Scientists can use the acquired ECM scaffold to reproduce a functional organ by introducing progenitor cells, or adult stem cells (ASCs), and allowing them to differentiate within the scaffold to develop into the desired tissue. The produced organ or tissue can be transplanted into a patient. In contrast to cell surface antibodies, the biochemical components of the ECM are conserved between hosts, so the risk of a hostile immune response is minimized. Proper conservation of ECM fibers, growth factors, and other proteins is imperative to the progenitor cells differentiating into the proper adult cells. The success of decellularization varies based on the components and density of the applied tissue and its origin. The applications to the decellularizing method of producing a biomaterial scaffold for tissue regeneration are present in cardiac, dermal, pulmonary, renal, and other types of tissues. Complete organ reconstruction is still in the early levels of development. Process overview Researchers are able to take the tissue from a donor or cadaver, lyse and kill the cells within the tissue without damaging the extracellular components, and finish with a product that is the natural ECM scaffold that has the same physical and biochemical functions of the natural tissue. After acquiring the ECM scaffold, scientists can recellularize the tissue with potent stem or progenitor cells that will differentiate into the original type of tissue. By removing the cells from a donor tissue, the immunogenic antibodies from the donor will be removed. The progenitor cells can be taken from the host, therefore they will not have an adverse response to the tissue. This process of decellularizing tissues and organs is still being developed, but the exact process of taking a tissue from a donor and removing all the cellular components is considered to be the decellularization process. 
The steps to go from a decellularized ECM scaffold to a functional organ is under the umbrella of recellularization. Because of the diverse applications of tissue in the human body, decellularization techniques have to be tailored to the specific tissue being exercised on. The researched methods of decellularization include physical, chemical, and enzymatic treatments. Though some methods are more commonly used, the exact combination of treatments is variable based on the tissue’s origin and what it is needed for. As far as introducing the different liquidized chemicals and enzymes to an organ or tissue, perfusion and immersion decellularization techniques have been used. Perfusion decellularization is applicable when an extensive vasculature system is present in the organ or tissue. It is crucial for the ECM scaffold to be decellularized at all levels, and evenly throughout the structure. Because of this requirement, vascularized tissues can have chemicals and enzymes perfused through the present arteries, veins, and capillaries. Under this mechanism and proper physiological conditions, treatments can diffuse equally to all of the cells within the organ. The treatments can be removed through the veins at the end of the process. Cardiac and pulmonary decellularization often uses this process of decellularization to introduce the treatments because of their heavily vascularized networks. Immersion decellularization is accomplished through the submersion of a tissue in chemical and enzymatic treatments. This process is more easily accomplished than perfusion, but is limited to thin tissues with a limited vascular system. Physical treatments The most common physical methods used to lyse, kill, and remove cells from the matrix of a tissue through the use of temperature, force and pressure, and electrical disruption. Temperature methods are often used in a rapid freeze-thaw mechanism. By quickly freezing a tissue, microscopic ice crystals form around the plasma membrane and the cell is lysed. After lysing the cells, the tissue can be further exposed to liquidized chemicals that degrade and wash out the undesirable components. Temperature methods conserve the physical structure of the ECM scaffold, but are best handled by thick, strong tissues. Direct force of pressure to a tissue will guarantee disruption of the ECM structure, so pressure is commonly used. Pressure decellularization involves the controlled use of hydrostatic pressure applied to a tissue or organ. This is done best at high temperatures to avoid unmonitored ice crystal formation that could damage the scaffold. Electrical disruption of the plasma membrane is another option to lyse the cells housed in a tissue or organ. By exposing a tissue to electrical pulses, micropores are formed at the plasma membrane. The cells eventually turn to death after their homeostatic electrical balance is ruined through the applied stimulus. This electrical process is documented as Non-thermal irreversible electroporation (NTIRE) and is limited to small tissues and the limited possibilities of inducing an electric current in vivo. Chemical treatments The proper combination of chemicals is selected for decellularization depending on the thickness, extracellular matrix composition, and intended use of the tissue or organ. For example, enzymes would not be used on a collagenous tissue because they disrupt the connective tissue fibers. 
However, when collagen is not present in a high concentration or needed in the tissue, enzymes can be a viable option for decellularization. The chemicals used to kill and remove the cells include acids, alkaline treatments, ionic detergents, non-ionic detergents, and zwitterionic detergents. The ionic detergent, sodium dodecyl sulfate (SDS), is commonly used because of its high efficacy for lysing cells without significant damage to the ECM. Detergents act effectively to lyse the cell membrane and expose the contents to further degradation. After SDS lyses the cell membrane, endonucleases and exonucleases degrade the genetic contents, while other components of the cell are solubilized and washed out of the matrix. SDS is commonly used even though it has a tendency to slightly disrupt the ECM structure. Alkaline and acid treatments can be effective companions with an SDS treatment due to their ability to degrade nucleic acids and solubilize cytoplasmic inclusions. The most well known non-ionic detergent is Triton X-100, which is popular because of its ability to disrupt lipid-lipid and lipid-protein interactions. Triton X-100 does not disrupt protein-protein interactions, which is beneficial to keeping the ECM intact. EDTA is a chelating agent that binds calcium, which is a necessary component for proteins to interact with one another. By making calcium unavailable, EDTA prevents the integral proteins between cells from binding to one another. EDTA is often used with trypsin, an enzyme that acts as a protease to cleave the already existing bonds between integral proteins of neighboring cells within a tissue. Together, the EDTA-Trypsin combination make a good team for decellularizing tissues. Enzymatic treatments Enzymes used in decellularization treatments are used to break the bonds and interactions between nucleic acids, interacting cells through neighboring proteins, and other cellular components. Lipases, thermolysin, galactosidase, nucleases, and trypsin have all been used in the removal of cells. After a cell is lysed with a detergent, acid, physical pressure, etc., endonucleases and exonucleases can begin the degradation of the genetic material. Endonucleases cleave DNA and RNA in the middle of sequences. Benzoase, an endonuclease, produces multiple small nuclear fragments that can be further degraded and removed from the ECM scaffold. Exonucleases act at the end of DNA sequences to cleave the phosphodiester bonds and further degrade the nucleic acid sequences. Enzymes such as trypsin act as proteases that cleave the interactions between proteins. Although trypsin can have adverse effects of collagen and elastin fibers of the ECM, using it in a time-sensitive manner controls any potential damage it could cause on the extracellular fibers. Dispase is used to prevent undesired aggregation of cells, which is beneficial in promoting their separating from the ECM scaffold. Experimentation has shown dispase to be most effective on the surface of a thin tissue, such as a lung in pulmonary tissue regeneration. To successfully remove deep cells of a tissue with dispase, mechanical agitation is often included in the process. Collagenase is only used when the ECM scaffold product does not require an intact collagen structure. Lipases are commonly used when decellularized skin grafts are needed. Lipase acids function in decellularizing dermal tissues through delipidation and cleaving the interactions between heavily lipidized cells. 
The enzyme, α-galactosidase is a relevant treatment when removing the Gal epitope antigen from cell surfaces. Applications A natural ECM scaffold provides the necessary physical and biochemical environment to facilitate the growth and specialization of potent progenitor and stem cells. Acellular matrices have been isolated in vitro and in vivo in a number of different tissues and organs. Decellularized ECM can be used to prepare bio-ink for 3D bioprinting. The most applicable success from decellularized tissues has come from symmetrical tissues that have less specialization, such as bone and dermal grafts; however, research and success are ongoing at the organ level. Acellular dermal matrices have been successful in a number of different applications. For example, skin grafts are used in cosmetic surgery and burn care. The decellularized skin graft provides mechanical support to the damaged area while supporting the development of host-derived connective tissue. Cardiac tissue has clinical success in developing human valves from natural ECM matrices. A technique known as the Ross procedure uses an acellular heart valve to replace a defective valve, allowing native cells to repopulate a newly functioning valve. Decellularized allografts have been critical in bone grafts that function in bone reconstruction and replacing of deformed bones in patients. The limits to myocardial tissue engineering come from the ability to immediately perfuse and seed and implemented heart into a patient. Though the ECM scaffold maintains the protein and growth factors of the natural tissue, the molecular level specialization has not yet been harnessed by researchers using decellularized heart scaffolds. Better success at using a whole organ from decellularization techniques has been found in pulmonary research. Scientists have been able to regenerate whole lungs in vitro from rat lungs using perfusion-decellularization. By seeding the matrix with fetal rat lung cells, a functioning lung was produced. The in vitro-produced lung was successfully implemented into a rat, which attests to the possibilities of translating an in vitro produced organ into a patient. Other success for decellularization has been found in small intestinal submucosa (SIS), renal, hepatic, and pancreatic engineering. Because it is a thin material, the SIS matrix can be decellularized through immersing the tissue in chemical and enzymatic treatments. Renal tissue engineering is still developing, but cadaveric kidney matrices have been able to support development of potent fetal kidney cells. Pancreatic engineering is a testament to the molecular specificity of organs. Scientists have not yet been able to produce an entirely functioning pancreas, but they have had success in producing an organ that functions at specific segments. For example, diabetes in rats was shown to decrease by seeding a pancreatic matrix at specific sites. The future applications of decellularized tissue matrix is still being discovered and is considered one of the most hopeful areas in regenerative research. See also Organ transplant Regeneration in humans Regenerative medicine Tissue engineering Transplant rejection References Cell biology Transplantation medicine Tissue engineering Articles containing video clips
Decellularization
[ "Chemistry", "Engineering", "Biology" ]
2,806
[ "Biological engineering", "Cell biology", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
33,996,996
https://en.wikipedia.org/wiki/Stefan%20adhesion
Stefan adhesion is the normal stress (force per unit area) acting between two discs when their separation is attempted. Stefan's law governs the flow of a viscous fluid between solid parallel plates and thus the forces acting when the plates are brought together or separated. The force $F$ resulting at separation $h$ between two parallel circular disks of radius $R$, immersed in a Newtonian fluid with viscosity $\eta$, at time $t$, depends on the rate of change of separation $dh/dt$: $F = \frac{3 \pi \eta R^4}{2 h^3} \frac{dh}{dt}$. Stefan adhesion is mentioned in conjunction with bioadhesion by mucus-secreting animals. Nevertheless, most such systems violate the assumptions of the equation. In addition, these systems are much more complex when the fluid is non-Newtonian or inertial effects are relevant (high flow rate). References Intermolecular forces
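A small numerical check of the expression above; all parameter values are illustrative assumptions.

```python
import math

def stefan_force(radius_m, viscosity_pa_s, separation_m, separation_rate_m_s):
    """Stefan adhesion force between parallel discs: F = 3*pi*eta*R^4 / (2*h^3) * dh/dt."""
    return (3.0 * math.pi * viscosity_pa_s * radius_m**4
            / (2.0 * separation_m**3)) * separation_rate_m_s

# Example: 1 cm discs, water-like viscosity, 10 micron gap, pulled apart at 1 mm/s
# gives roughly 47 N, illustrating why thin viscous films resist separation strongly.
print(stefan_force(radius_m=0.01, viscosity_pa_s=1e-3,
                   separation_m=1e-5, separation_rate_m_s=1e-3))
```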
Stefan adhesion
[ "Chemistry", "Materials_science", "Engineering" ]
160
[ "Molecular physics", "Materials science", "Intermolecular forces" ]
33,998,310
https://en.wikipedia.org/wiki/Mountain%20car%20problem
Mountain Car, a standard testing domain in Reinforcement learning, is a problem in which an under-powered car must drive up a steep hill. Since gravity is stronger than the car's engine, even at full throttle, the car cannot simply accelerate up the steep slope. The car is situated in a valley and must learn to leverage potential energy by driving up the opposite hill before the car is able to make it to the goal at the top of the rightmost hill. The domain has been used as a test bed in various reinforcement learning papers. Introduction The mountain car problem, although fairly simple, is commonly applied because it requires a reinforcement learning agent to learn on two continuous variables: position and velocity. For any given state (position and velocity) of the car, the agent is given the possibility of driving left, driving right, or not using the engine at all. In the standard version of the problem, the agent receives a negative reward at every time step when the goal is not reached; the agent has no information about the goal until an initial success. History The mountain car problem appeared first in Andrew Moore's PhD thesis (1990). It was later more strictly defined in Singh and Sutton's reinforcement learning paper with eligibility traces. The problem became more widely studied when Sutton and Barto added it to their book Reinforcement Learning: An Introduction (1998). Throughout the years many versions of the problem have been used, such as those which modify the reward function, termination condition, and the start state. Techniques used to solve mountain car Q-learning and similar techniques for mapping discrete states to discrete actions need to be extended to be able to deal with the continuous state space of the problem. Approaches often fall into one of two categories, state space discretization or function approximation. Discretization In this approach, two continuous state variables are pushed into discrete states by bucketing each continuous variable into multiple discrete states. This approach works with properly tuned parameters but a disadvantage is information gathered from one state is not used to evaluate another state. Tile coding can be used to improve discretization and involves continuous variables mapping into sets of buckets offset from one another. Each step of training has a wider impact on the value function approximation because when the offset grids are summed, the information is diffused. Function approximation Function approximation is another way to solve the mountain car. By choosing a set of basis functions beforehand, or by generating them as the car drives, the agent can approximate the value function at each state. Unlike the step-wise version of the value function created with discretization, function approximation can more cleanly estimate the true smooth function of the mountain car domain. Eligibility traces One aspect of the problem involves the delay of actual reward. The agent is not able to learn about the goal until a successful completion. Given a naive approach for each trial the car can only backup the reward of the goal slightly. This is a problem for naive discretization because each discrete state will only be backed up once, taking a larger number of episodes to learn the problem. This problem can be alleviated via the mechanism of eligibility traces, which will automatically backup the reward given to states before, dramatically increasing the speed of learning. 
Eligibility traces can be viewed as a bridge from temporal difference learning methods to Monte Carlo methods. Technical details The mountain car problem has undergone many iterations. This section focuses on the standard well-defined version from Sutton (2008). State variables Two-dimensional continuous state space: the position, with −1.2 ≤ x_t ≤ 0.5, and the velocity, with −0.07 ≤ v_t ≤ 0.07. Actions One-dimensional discrete action space: a_t ∈ {−1, 0, 1}, i.e. full throttle reverse, zero throttle, and full throttle forward. Reward For every time step: r_t = −1. Update function For every time step: v_{t+1} = bound(v_t + 0.001·a_t − 0.0025·cos(3·x_t)) and x_{t+1} = bound(x_t + v_{t+1}), where the bound operation clips each variable to its allowed range. Starting condition x_0 = −0.5 and v_0 = 0. Optionally, many implementations include randomness in both parameters to show better generalized learning. Termination condition End the simulation when x_t ≥ 0.5, the goal position at the top of the rightmost hill. Variations There are many versions of the mountain car which deviate in different ways from the standard model. Variables that vary include, but are not limited to, the constants of the problem (gravity and steepness), so that specific tuning for specific policies becomes irrelevant, and the reward function, which affects the agent's ability to learn in a different manner. An example is changing the reward to be equal to the distance from the goal, or changing the reward to zero everywhere and one at the goal. Additionally, a 3D mountain car can be used, with a 4D continuous state space. References Implementations C++ Mountain Car Software. Richard S. Sutton. Java Mountain Car with support for RL Glue Python, with good discussion (blog post - down page) Further reading Mountain Car with Replacing Eligibility Traces Gaussian Processes with Mountain Car Machine learning
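A minimal runnable sketch of the dynamics just described; the constants match the standard values listed above, and a random policy illustrates why the under-powered car rarely reaches the goal without learning. The sketch is illustrative only, not reference code from any of the cited implementations.

import math
import random

# Standard mountain car constants (as listed in the technical details above).
MIN_POS, MAX_POS = -1.2, 0.5      # position bounds; the goal is at MAX_POS
MIN_VEL, MAX_VEL = -0.07, 0.07    # velocity bounds
ACTIONS = (-1, 0, 1)              # full reverse, zero throttle, full forward

def step(position, velocity, action):
    """Advance the car one time step; return (position, velocity, reward, done)."""
    velocity += 0.001 * action - 0.0025 * math.cos(3.0 * position)
    velocity = max(MIN_VEL, min(MAX_VEL, velocity))
    position = max(MIN_POS, min(MAX_POS, position + velocity))
    done = position >= MAX_POS     # reached the goal on the rightmost hill
    reward = -1.0                  # -1 on every step until the goal is reached
    return position, velocity, reward, done

# A random policy: without learning, the car almost never escapes the valley.
position, velocity = -0.5, 0.0
total_reward, done, steps = 0.0, False, 0
while not done and steps < 10_000:
    action = random.choice(ACTIONS)
    position, velocity, reward, done = step(position, velocity, action)
    total_reward += reward
    steps += 1
print(f"finished={done}, steps={steps}, return={total_reward}")

A learning agent would replace the random choice with, for example, a Q-table over discretized position–velocity buckets or a tile-coded value function, as discussed in the sections on discretization and function approximation.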
Mountain car problem
[ "Engineering" ]
920
[ "Artificial intelligence engineering", "Machine learning" ]
33,998,505
https://en.wikipedia.org/wiki/Split%20TEV
The split TEV technique is a molecular method to monitor protein-protein interactions in living cells. It is based on the functional reconstitution of two previously inactive fragments derived from the NIa protease of the tobacco etch virus (TEV protease). These fragments, either an N-terminal (NTEV) or C-terminal part (CTEV), are fused to protein interaction partners of choice. Upon interaction of the two candidate proteins, the NTEV and CTEV fragments get into close proximity, regain proteolytic activity, and activate specific TEV reporters which indicate an occurred protein-protein interaction. References Further reading Protein–protein interaction assays
Split TEV
[ "Chemistry", "Biology" ]
141
[ "Biochemistry methods", "Protein–protein interaction assays" ]
33,999,192
https://en.wikipedia.org/wiki/Polysome%20%28crystallography%29
In crystallography, the term polysome is used to describe overall mineral structures which have structurally and compositionally different framework structures. A general example is amphiboles, in which cutting along the {010} plane yields alternating layers of pyroxene and trioctahedral mica. References Crystallography
Polysome (crystallography)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
66
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics", "Analytical chemistry stubs" ]
52,328,611
https://en.wikipedia.org/wiki/Advanced%20automation%20functions
In automation production technology, the actions performed by an automated process are executed by a program of instructions that runs during a work cycle. In addition to executing work cycle programs, an automated system should be capable of executing the advanced functions described below. Safety monitoring If workers are present in or near an automated system, safety monitoring is required for their occupational safety and health. When a hazard is detected, the safety monitoring function can respond in various ways, including stopping the system completely, sounding an alarm, or reducing the operating speed. Typical safety sensors include limit switches, temperature probes, heat and smoke detectors, and pressure-sensitive floor pads. Maintenance and repair diagnostics Three modes of operation are used in a maintenance and repair diagnostics cycle: status monitoring, failure diagnostics, and recommendation of a repair procedure. In the status monitoring mode, the current system status is displayed. The failure diagnostics mode is invoked when a failure occurs. The system then suggests an adequate repair procedure to a team of experts. Error detection and recovery The error detection mode determines if and when a failure occurs in the automated system. The possible errors can be divided into three categories: random errors, systematic errors, and aberrations. In the error recovery mode, remedial actions are taken for all detected errors. References Boucher, T.O., Computer Automation in Manufacturing, Chapman & Hall, London, 1996
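A schematic sketch of the graded safety responses described above (stop, alarm, or reduced speed). The sensor names and thresholds are purely illustrative assumptions, not values from any standard.

def safety_response(readings):
    """Map sensor readings to a graded safety action.

    `readings` is a dict with illustrative keys; the thresholds are made up
    for this sketch and would come from a risk assessment in practice.
    """
    if readings.get("floor_pad_pressed", False):
        return "EMERGENCY_STOP"          # worker inside the guarded area
    if readings.get("temperature_c", 0) > 90 or readings.get("smoke_detected", False):
        return "SOUND_ALARM"
    if readings.get("light_curtain_broken", False):
        return "REDUCE_SPEED"            # someone is near the work envelope
    return "CONTINUE"

print(safety_response({"temperature_c": 95}))         # SOUND_ALARM
print(safety_response({"floor_pad_pressed": True}))   # EMERGENCY_STOP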
Advanced automation functions
[ "Engineering" ]
280
[ "Control engineering", "Automation" ]
52,336,768
https://en.wikipedia.org/wiki/Southwell%20plot
The Southwell plot is a graphical method of determining experimentally a structure's critical load, without needing to subject the structure to near-critical loads. The technique can be used for nondestructive testing of any structural elements that may fail by buckling. Critical load Consider a simply supported beam of length L and flexural rigidity EI under a compressive load P. The differential equation of equilibrium is EI v'''' + P (v + v0)'' = 0, where v0 is the initial deflection and v is the additional deflection produced by the load, and the boundary conditions are v = v'' = 0 at x = 0 and x = L. Assuming that the initial and the additional deflected shapes can be expressed as Fourier series, v0 = Σ an sin(nπx/L) and v = Σ bn sin(nπx/L), then after substitution into the differential equation, bn = P an / (Pn − P), with Pn = n²π²EI/L². This relates the deflected shape to the initial imperfections and the applied load. Specifically, at x = L/2, v(L/2) = Σ bn sin(nπ/2). As P approaches P1, v(L/2) is dominated by the first term, b1 = P a1/(P1 − P). Therefore, when P → P1 the fundamental mode dominates, and the relation can be rearranged as v/P = v/P1 + a1/P1. Southwell therefore plotted v/P against v and obtained P1 = Pcritical = Pc from the slope of the predicted straight-line graph. This analysis was done for a specific point on a simply supported beam, but the concept can be extended to arbitrary structures. With any problem whose mathematical analog is the same fourth-order ordinary differential equation as above, with similar boundary conditions, the first eigenvalue of the associated homogeneous problem can be obtained from the slope of the graph. Therefore, a point of large deflection can be chosen, and it does not need to be the center of a simply supported beam. Applications Strictly speaking, Southwell's plot is applicable only to structures with a neutral post-buckling path. Initially created for stability problems in column buckling, the Southwell method has also been used to determine critical loads in frame and plate buckling experiments. The method is particularly useful for field tests of structures that are likely to be damaged by applying loads near the critical load and beyond, such as reinforced concrete columns or advanced composite materials. The method can also minimize parasitic effects in experiments and give values that are closer to the theoretically expected ones. For example, in a real experimental setup it is impossible to reproduce any theoretical boundary condition perfectly. Additionally, the results of compressive tests can be very sensitive to imperfections and the actual boundary conditions. Therefore, the measured critical load during the experiment can be very different from what is predicted. References Nondestructive testing Mechanics
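A small numerical sketch of the procedure: synthetic deflection data are generated from the first-mode relation v = P·a1/(Pc − P), then v/P is regressed against v, and the reciprocal of the slope recovers the critical load. The critical load, imperfection amplitude, and noise level below are illustrative assumptions.

import numpy as np

P_c, a1 = 250.0, 0.4                     # assumed critical load and first-mode imperfection
P = np.linspace(40.0, 180.0, 15)         # applied loads, all well below P_c
v = P * a1 / (P_c - P)                   # "measured" mid-span deflections (first mode only)
v += np.random.normal(scale=0.01, size=v.shape)   # measurement noise

# Southwell plot: v/P versus v should be a straight line with slope 1/P_c.
slope, intercept = np.polyfit(v, v / P, 1)
print(f"estimated critical load: {1.0 / slope:.1f} (true value {P_c})")
print(f"estimated imperfection a1: {intercept / slope:.3f} (true value {a1})")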
Southwell plot
[ "Physics", "Materials_science", "Engineering" ]
473
[ "Nondestructive testing", "Materials testing", "Mechanics", "Mechanical engineering" ]
52,337,409
https://en.wikipedia.org/wiki/Microbial%20drug%20delivery
Microbial drug delivery is an emerging form of drug administration characterized by the use of commensal microbes that have been genetically modified to produce medications for chronic diseases in humans. Only proteinaceous drugs can be produced by microbes, as DNA encodes for protein. Research into microbial drug delivery refers to this route of administration as topical, since the microbes release the drug directly to the surface of affected tissues, namely the gastrointestinal (GI) epithelium. Microbial drug delivery is not currently used as a standard route of drug administration due to its experimental nature. During clinical trials, it has been used to treat forms of inflammatory bowel disease (IBD). The most prominently studied vehicles of microbial drug administration are the bacterial species, Lactococcus lactis and Bacteroides ovatus. Medical usage The usage of recombinant microbes (i.e. microorganisms designed to contain DNA from two or more different species) has applications in treating chronic diseases. In 2006, Braat et al. implemented microbial drug delivery with L. lactis in clinical trials, successfully treating Crohn's disease (CD), a form of IBD that causes inflammation and ulceration in the intestines. In this study, a recombinant strain of L. lactis containing complementary DNA (cDNA) for the human interleukin-10 (IL-10) gene was used to treat CD with IL-10, an anti-inflammatory cytokine. Patients consumed capsules containing the microbe to populate the intestines and received therapeutic doses of IL-10 directly from the recombinant bacteria. As this route of administration is experimental, it is currently not available as a standard treatment option. In a 2013 animal study with B. ovatus as the vehicle for microbial drug delivery, researcher Zaed Hamady suggested that recombinant strains of B. ovatus containing transforming growth factor-beta (TGF-β) and keratinocyte growth factor-2 (KGF-2) are ready for clinical trials. Mechanism of drug administration L. lactis mechanism The L. lactis mechanism of microbial drug delivery described in the 2006 study of Braat et al. uses a form of recombinant L. lactis (LL-Thy12) which has replaced the gene, Thy12, with the gene for human IL-10. Removal of Thy12, which encodes for the production of thymidine, causes L. lactis to become dependent on dietary thymidine to maintain live colonies in the gut. The addition of the IL-10 gene allows for the production of human IL-10 to decreases gut inflammation. Secretion of IL-10 from L. lactis in the gut is considered to be a topical administration of the drug to the epithelium, permitting healing in local tissues damaged by inflammation. The administration of IL-10 topically avoids systemic effects, such as immunosuppression in non-target tissues. When using LL-Thy12, IL-10 secretion is dependent on the quantity of live LL-Thy12 in the GI tract. As the presence of dietary thymidine increases the quantity of LL-Thy12, the drug production increases proportionately. Reductions in dietary thymidine kill LL-Thy12, decreasing the production of IL-10. There is a delay of approximately 72-hours between a change in thymidine dosage and the production of IL-10. Due to LL-Thy12’s dependence on thymidine, they will die upon exiting the body through defecation. B. ovatus mechanism B. ovatus has been used in animal studies as a mode of microbial drug delivery due to its xylanase operon. 
Operons exist in bacteria to control gene expression and are composed of a DNA sequence containing an operator followed by the genes of interest. The operator in the xylanase operon prevents transcription of genes when bound to a repressor protein. The B. ovatus xylanase operon only functions in the presence of the starch, xylan, which removes the repressor and enables production of whichever proteins correlate with the genes located after the operator. For microbial drug delivery, the genes after the operator include those inserted as part of the genetic modification. Xylan is non-digestible to human gastric acid or digestive enzymes, so a predictable quantity of dietary xylan will reach the recombinant B. ovatus in the gut, hypothetically allowing for a precise quantity of drug to be produced by the recombinant B. ovatus. In mice, recombinant B. ovatus strains containing genes for growth factors TGF-β and KGF-2 within the xylanase operon have successfully treated ulcerative colitis (UC). The secreted drugs from B. ovatus are applied topically to the epithelial lining, affecting local tissues rather than acting systemically. Systemic administration of these growth factors could otherwise cause tumors and increased vascularization of tissues. When administered microbially, TGF-β and KGF-2 facilitate tissue repair only in the colon where they are released. Safety of microbial drug delivery Safety is a major factor in the efficacy of microbial drug delivery. Depending on the type of drug being administered, a certain level of control is required for effective and safe treatment of colonic diseases. The L. lactis system has a 72-hour delay between ingesting thymidine and activating IL-10 production, while the B. ovatus system allows the drug to be produced once xylan reaches the bacteria. Regarding IL-10 secretions in L. lactis, the delay is acceptable for treatment of IBD, however any drug that requires a precise dosage and timing may necessitate B. ovatus for controlling drug output. The safety of microbial drug delivery is tied to the microbes’ commensal capability and instance of pathogenesis. A highly pathogenic microbe would not be suitable for medical treatment due to an inherent infection risk. L. lactis is considered by the Food and Drug Administration (FDA) to be generally recognized as safe (GRAS), as it is commonly found in widely consumed dairy products, suggesting its safety in medical treatment. B. ovatus is naturally found in 10% of healthy human colons, demonstrating safety in its compatibility with the human gut microbiota; however, Bacteroides species are known in some cases to cause infections, typically resulting from surgery in the GI tract. Concerns regarding the containment of recombinant microbes in the gut have been addressed through safety mechanisms in both L. lactis and B. ovatus. Containment refers to the inability of microbes to colonize the external environment, where they may have unknown consequences. LL-Thy12 will die upon removal from the body, as they depend on dietary thymidine for survival. B. ovatus is naturally an obligate anaerobe, so any recombinant strain is expected to die in the presence of oxygen once removed from the body. See also Gut flora Oral administration Mucous membrane Absorption (pharmacokinetics) List of human flora List of microorganisms used in food and beverage preparation References Drug delivery devices
Microbial drug delivery
[ "Chemistry" ]
1,534
[ "Pharmacology", "Drug delivery devices" ]
52,338,636
https://en.wikipedia.org/wiki/Fusarium%20acaciae-mearnsii
Fusarium acaciae-mearnsii is a fungus species of the genus Fusarium which produces zearalenone and zearalenol. References Further reading acaciae-mearnsii Fungi described in 2004 Fungus species
Fusarium acaciae-mearnsii
[ "Biology" ]
48
[ "Fungi", "Fungus species" ]
55,228,538
https://en.wikipedia.org/wiki/Co-Hopfian%20group
In the mathematical subject of group theory, a co-Hopfian group is a group that is not isomorphic to any of its proper subgroups. The notion is dual to that of a Hopfian group, named after Heinz Hopf. Formal definition A group G is called co-Hopfian if whenever is an injective group homomorphism then is surjective, that is . Examples and non-examples Every finite group G is co-Hopfian. The infinite cyclic group is not co-Hopfian since is an injective but non-surjective homomorphism. The additive group of real numbers is not co-Hopfian, since is an infinite-dimensional vector space over and therefore, as a group . The additive group of rational numbers and the quotient group are co-Hopfian. The multiplicative group of nonzero rational numbers is not co-Hopfian, since the map is an injective but non-surjective homomorphism. In the same way, the group of positive rational numbers is not co-Hopfian. The multiplicative group of nonzero complex numbers is not co-Hopfian. For every the free abelian group is not co-Hopfian. For every the free group is not co-Hopfian. There exists a finitely generated non-elementary (that is, not virtually cyclic) virtually free group which is co-Hopfian. Thus a subgroup of finite index in a finitely generated co-Hopfian group need not be co-Hopfian, and being co-Hopfian is not a quasi-isometry invariant for finitely generated groups. Baumslag–Solitar groups , where , are not co-Hopfian. If G is the fundamental group of a closed aspherical manifold with nonzero Euler characteristic (or with nonzero simplicial volume or nonzero L2-Betti number), then G is co-Hopfian. If G is the fundamental group of a closed connected oriented irreducible 3-manifold M then G is co-Hopfian if and only if no finite cover of M is a torus bundle over the circle or the product of a circle and a closed surface. If G is an irreducible lattice in a real semi-simple Lie group and G is not a virtually free group then G is co-Hopfian. E.g. this fact applies to the group for . If G is a one-ended torsion-free word-hyperbolic group then G is co-Hopfian, by a result of Sela. If G is the fundamental group of a complete finite volume smooth Riemannian n-manifold (where n > 2) of pinched negative curvature then G is co-Hopfian. The mapping class group of a closed hyperbolic surface is co-Hopfian. The group Out(Fn) (where n>2) is co-Hopfian. Delzant and Polyagailo gave a characterization of co-Hopficity for geometrically finite Kleinian groups of isometries of without 2-torsion. A right-angled Artin group (where is a finite nonempty graph) is not co-Hopfian; sending every standard generator of to a power defines and endomorphism of which is injective but not surjective. A finitely generated torsion-free nilpotent group G may be either co-Hopfian or not co-Hopfian, depending on the properties of its associated rational Lie algebra. If G is a relatively hyperbolic group and is an injective but non-surjective endomorphism of G then either is parabolic for some k >1 or G splits over a virtually cyclic or a parabolic subgroup. Grigorchuk group G of intermediate growth is not co-Hopfian. Thompson group F is not co-Hopfian. There exists a finitely generated group G which is not co-Hopfian but has Kazhdan's property (T). If G is Higman's universal finitely presented group then G is not co-Hopfian, and G cannot be embedded in a finitely generated recursively presented co-Hopfian group. 
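Two of the simplest items above, spelled out as a worked example (standard arguments, included here only for illustration):

\textbf{Finite groups are co-Hopfian:} if $\varphi\colon G \to G$ is injective, then
$|\varphi(G)| = |G|$; since $\varphi(G) \subseteq G$ and $G$ is finite, this forces
$\varphi(G) = G$, so $\varphi$ is surjective.

\textbf{$\mathbb{Z}$ is not co-Hopfian:} the map
\[
  \varphi\colon \mathbb{Z} \to \mathbb{Z}, \qquad \varphi(n) = 2n,
\]
is an injective homomorphism, but $\varphi(\mathbb{Z}) = 2\mathbb{Z} \subsetneq \mathbb{Z}$,
so $\varphi$ is not surjective; equivalently, $\mathbb{Z}$ is isomorphic to its proper
subgroup $2\mathbb{Z}$.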
Generalizations and related notions A group G is called finitely co-Hopfian if whenever is an injective endomorphism whose image has finite index in G then . For example, for the free group is not co-Hopfian but it is finitely co-Hopfian. A finitely generated group G is called scale-invariant if there exists a nested sequence of subgroups of finite index of G, each isomorphic to G, and whose intersection is a finite group. A group G is called dis-cohopfian if there exists an injective endomorphism such that . In coarse geometry, a metric space X is called quasi-isometrically co-Hopf if every quasi-isometric embedding is coarsely surjective (that is, is a quasi-isometry). Similarly, X is called coarsely co-Hopf if every coarse embedding is coarsely surjective. In metric geometry, a metric space K is called quasisymmetrically co-Hopf if every quasisymmetric embedding is onto. See also Hopfian object References Further reading K. Varadarajan, Hopfian and co-Hopfian Objects, Publicacions Matemàtiques 36 (1992), no. 1, pp. 293–317 Group theory
Co-Hopfian group
[ "Mathematics" ]
1,180
[ "Group theory", "Fields of abstract algebra" ]
56,660,684
https://en.wikipedia.org/wiki/Dittert%20conjecture
The Dittert conjecture, or Dittert–Hajek conjecture, is a mathematical hypothesis in combinatorics concerning the maximum achieved by a particular function of matrices with real, nonnegative entries satisfying a summation condition. The conjecture is due to Eric Dittert and (independently) Bruce Hajek. Let A = [a_ij] be a square matrix of order n with nonnegative entries and with σ(A) = n, where σ(A) denotes the sum of all entries of A. Its permanent is defined as per(A) = Σ_σ Π_i a_{i,σ(i)}, where the sum extends over all elements σ of the symmetric group S_n. The Dittert conjecture asserts that the function φ defined by φ(A) = Π_i (Σ_j a_ij) + Π_j (Σ_i a_ij) − per(A) is (uniquely) maximized when A = (1/n)·J_n, where J_n is defined to be the square matrix of order n with all entries equal to 1. References Conjectures Combinatorics Inequalities
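A worked check of the conjectured maximizer for the smallest nontrivial case, n = 2; the comparison matrix is chosen here only for illustration.

For $n = 2$ the conjectured maximizer is
$A = \tfrac{1}{2}J_2 = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$,
which satisfies $\sigma(A) = 2$. Its row sums and column sums are all $1$ and
$\operatorname{per}(A) = \tfrac14 + \tfrac14 = \tfrac12$, so
\[
  \phi(A) = 1\cdot 1 + 1\cdot 1 - \tfrac12 = \tfrac32 .
\]
For comparison, the identity matrix $I_2$ also satisfies $\sigma(I_2) = 2$, but
\[
  \phi(I_2) = 1\cdot 1 + 1\cdot 1 - \operatorname{per}(I_2) = 2 - 1 = 1 < \tfrac32 ,
\]
consistent with the conjecture that $\tfrac{1}{n}J_n$ gives the maximum.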
Dittert conjecture
[ "Mathematics" ]
145
[ "Discrete mathematics", "Unsolved problems in mathematics", "Binary relations", "Combinatorics", "Conjectures", "Mathematical relations", "Inequalities (mathematics)", "Combinatorics stubs", "Mathematical problems", "Mathematical theorems" ]
56,661,772
https://en.wikipedia.org/wiki/Automorphism%20group
In mathematics, the automorphism group of an object X is the group consisting of automorphisms of X under composition of morphisms. For example, if X is a finite-dimensional vector space, then the automorphism group of X is the group of invertible linear transformations from X to itself (the general linear group of X). If instead X is a group, then its automorphism group is the group consisting of all group automorphisms of X. Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is sometimes called a transformation group. Automorphism groups are studied in a general way in the field of category theory. Examples If X is a set with no additional structure, then any bijection from X to itself is an automorphism, and hence the automorphism group of X in this case is precisely the symmetric group of X. If the set X has additional structure, then it may be the case that not all bijections on the set preserve this structure, in which case the automorphism group will be a subgroup of the symmetric group on X. Some examples of this include the following: The automorphism group of a field extension is the group consisting of field automorphisms of L that fix K. If the field extension is Galois, the automorphism group is called the Galois group of the field extension. The automorphism group of the projective n-space over a field k is the projective linear group The automorphism group of a finite cyclic group of order n is isomorphic to , the multiplicative group of integers modulo n, with the isomorphism given by . In particular, is an abelian group. The automorphism group of a finite-dimensional real Lie algebra has the structure of a (real) Lie group (in fact, it is even a linear algebraic group: see below). If G is a Lie group with Lie algebra , then the automorphism group of G has a structure of a Lie group induced from that on the automorphism group of . If G is a group acting on a set X, the action amounts to a group homomorphism from G to the automorphism group of X and conversely. Indeed, each left G-action on a set X determines , and, conversely, each homomorphism defines an action by . This extends to the case when the set X has more structure than just a set. For example, if X is a vector space, then a group action of G on X is a group representation of the group G, representing G as a group of linear transformations (automorphisms) of X; these representations are the main object of study in the field of representation theory. Here are some other facts about automorphism groups: Let be two finite sets of the same cardinality and the set of all bijections . Then , which is a symmetric group (see above), acts on from the left freely and transitively; that is to say, is a torsor for (cf. #In category theory). Let P be a finitely generated projective module over a ring R. Then there is an embedding , unique up to inner automorphisms. In category theory Automorphism groups appear very naturally in category theory. If X is an object in a category, then the automorphism group of X is the group consisting of all the invertible morphisms from X to itself. It is the unit group of the endomorphism monoid of X. (For some examples, see PROP.) If are objects in some category, then the set of all is a left -torsor. In practical terms, this says that a different choice of a base point of differs unambiguously by an element of , or that each choice of a base point is precisely a choice of a trivialization of the torsor. 
If and are objects in categories and , and if is a functor mapping to , then induces a group homomorphism , as it maps invertible morphisms to invertible morphisms. In particular, if G is a group viewed as a category with a single object * or, more generally, if G is a groupoid, then each functor , C a category, is called an action or a representation of G on the object , or the objects . Those objects are then said to be -objects (as they are acted by ); cf. -object. If is a module category like the category of finite-dimensional vector spaces, then -objects are also called -modules. Automorphism group functor Let be a finite-dimensional vector space over a field k that is equipped with some algebraic structure (that is, M is a finite-dimensional algebra over k). It can be, for example, an associative algebra or a Lie algebra. Now, consider k-linear maps that preserve the algebraic structure: they form a vector subspace of . The unit group of is the automorphism group . When a basis on M is chosen, is the space of square matrices and is the zero set of some polynomial equations, and the invertibility is again described by polynomials. Hence, is a linear algebraic group over k. Now base extensions applied to the above discussion determines a functor: namely, for each commutative ring R over k, consider the R-linear maps preserving the algebraic structure: denote it by . Then the unit group of the matrix ring over R is the automorphism group and is a group functor: a functor from the category of commutative rings over k to the category of groups. Even better, it is represented by a scheme (since the automorphism groups are defined by polynomials): this scheme is called the automorphism group scheme and is denoted by . In general, however, an automorphism group functor may not be represented by a scheme. See also Outer automorphism group Level structure, a technique to remove an automorphism group Holonomy group Notes Citations References External links https://mathoverflow.net/questions/55042/automorphism-group-of-a-scheme Group automorphisms
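A small computational illustration of the cyclic-group example above: every automorphism of Z/nZ is multiplication by a unit modulo n, so enumerating the units recovers the automorphism group and its order. This is a sketch written for this article, not library code.

from math import gcd

def automorphisms_of_cyclic(n):
    """Multipliers a (coprime to n) defining the automorphisms x -> a*x mod n of Z/nZ."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def is_automorphism(a, n):
    """Check that x -> a*x mod n is a bijective homomorphism of Z/nZ."""
    image = {a * x % n for x in range(n)}
    additive = all((a * (x + y)) % n == (a * x + a * y) % n
                   for x in range(n) for y in range(n))
    return additive and image == set(range(n))

n = 12
units = automorphisms_of_cyclic(n)
assert all(is_automorphism(a, n) for a in units)
print(f"Aut(Z/{n}Z) has order {len(units)}: multipliers {units}")
# prints: Aut(Z/12Z) has order 4: multipliers [1, 5, 7, 11] -- the unit group mod 12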
Automorphism group
[ "Mathematics" ]
1,255
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Group automorphisms" ]
56,666,301
https://en.wikipedia.org/wiki/Internal%20Security%20Assessor
Internal Security Assessor (ISA) is a designation given by the PCI Security Standards Council to eligible internal security audit professionals working for a qualifying organization. The intent of this qualification is for these individuals to receive PCI DSS training so that their qualifying organization has a better understanding of PCI DSS and how it impacts their company. Becoming an ISA can improve the relationship with Qualified Security Assessors and support the consistent and proper application of PCI DSS measures and controls within the organization. The PCI SSC's public website can be used to verify ISA employees. An ISA is also able to perform self-assessments for their organization as long as they are not a Level 1 merchant. ISA training is only available for merchants and processors. Organizations are required to have an internal audit department and cannot be affiliated with a Qualified Security Assessor or Approved Scanning Vendor (ASV) company in any way. Certificate Renewal The ISA certification must be renewed annually. The ISA certification is company-specific. If the certified individual leaves the company that sponsored them, the certification is no longer valid. References External links PCI Security Standards Council Computer security organizations Information privacy Standards
Internal Security Assessor
[ "Engineering" ]
232
[ "Cybersecurity engineering", "Information privacy" ]
56,666,801
https://en.wikipedia.org/wiki/Acetozone
Acetozone is an organic peroxide that is a strong oxidant. In the early 20th century, it found use as a surgical antiseptic and for the treatment of typhoid fever. It has also been used as a bleaching agent for flour. References Antiseptics Organic peroxides
Acetozone
[ "Chemistry" ]
66
[ "Organic compounds", "Organic peroxides" ]
38,017,102
https://en.wikipedia.org/wiki/Order%20unit
An order unit is an element of an ordered vector space which can be used to bound all elements from above. In this way (as seen in the first example below) the order unit generalizes the unit element in the reals. According to H. H. Schaefer, "most of the ordered vector spaces occurring in analysis do not have order units." Definition For the ordering cone in the vector space , the element is an order unit (more precisely a -order unit) if for every there exists a such that (that is, ). Equivalent definition The order units of an ordering cone are those elements in the algebraic interior of that is, given by Examples Let be the real numbers and then the unit element is an . Let and then the unit element is an . Each interior point of the positive cone of an ordered topological vector space is an order unit. Properties Each order unit of an ordered TVS is interior to the positive cone for the order topology. If is a preordered vector space over the reals with order unit then the map is a sublinear functional. Order unit norm Suppose is an ordered vector space over the reals with order unit whose order is Archimedean and let Then the Minkowski functional of defined by is a norm called the . It satisfies and the closed unit ball determined by is equal to that is, References Bibliography Mathematical analysis Topology
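A worked instance of the order unit norm for the finite-dimensional example above (the coordinatewise order on R^n with e = (1, …, 1)); the identification with the maximum norm is a standard fact, included here only as an illustration.

Let $X = \mathbb{R}^n$ with the ordering cone $K = \mathbb{R}^n_{\ge 0}$ and
order unit $e = (1,\dots,1)$. For $x = (x_1,\dots,x_n)$,
\[
  -\lambda e \le x \le \lambda e \iff |x_i| \le \lambda \ \text{for all } i,
\]
so the order unit norm is
\[
  \|x\|_e = \inf\{\lambda > 0 : -\lambda e \le x \le \lambda e\}
          = \max_{1 \le i \le n} |x_i| ,
\]
the maximum norm on $\mathbb{R}^n$.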
Order unit
[ "Physics", "Mathematics" ]
283
[ "Mathematical analysis", "Topology", "Space", "Geometry", "Spacetime" ]
38,017,421
https://en.wikipedia.org/wiki/Bounding%20point
In functional analysis, a branch of mathematics, a bounding point of a subset of a vector space is a conceptual extension of the boundary of a set. Definition Let A be a subset of a vector space X. Then a point x of X is a bounding point for A if it is neither an internal point of A nor an internal point of its complement. References Mathematical analysis Topology
Bounding point
[ "Physics", "Mathematics" ]
66
[ "Mathematical analysis", "Mathematical analysis stubs", "Topology", "Space", "Geometry", "Spacetime" ]
38,017,461
https://en.wikipedia.org/wiki/Extension%20Poly%28A%29%20Test
The extension Poly(A) Test (ePAT) is a method to determine the poly(A) tail lengths of mRNA molecules. It was developed and described by A. Jänicke et al. in 2012. The method consists of three separate steps. In the first step, the poly-adenylated RNA is hybridised to a DNA oligonucleotide featuring a poly-deoxythymidine sequence at its 5’ end. Klenow polymerase then catalyses elongation of the mRNA’s 3’ end, using the DNA oligonucleotide as a template. This reaction takes place at 25 °C. In the second step, reverse transcriptase synthesis extends the DNA oligonucleotides that have annealed to the mRNA’s extended 3’ end. In order to ensure that DNA oligomers hybridised to internal poly(A) sequences do not serve as primers for reverse transcription, the second step is carried out at 55 °C. The third and final step involves amplification of the newly synthesised cDNA via PCR. This PCR requires one gene-specific and one universal primer. Analysis of the amplicons’ lengths allows the length of the sequence flanked by the two primers, i.e. the poly(A) tail of the sample mRNA, to be estimated. According to the authors, by measuring poly(A) tail lengths and their distribution amongst different transcripts, this method can be used to determine the cell’s translation state, in place of the more tedious analysis of protein translation states. References Molecular biology techniques
Extension Poly(A) Test
[ "Chemistry", "Biology" ]
327
[ "Molecular biology techniques", "Molecular biology" ]
39,462,950
https://en.wikipedia.org/wiki/Superradiant%20phase%20transition
In quantum optics, a superradiant phase transition is a phase transition that occurs in a collection of fluorescent emitters (such as atoms), between a state containing few electromagnetic excitations (as in the electromagnetic vacuum) and a superradiant state with many electromagnetic excitations trapped inside the emitters. The superradiant state is made thermodynamically favorable by having strong, coherent interactions between the emitters. The superradiant phase transition was originally predicted by the Dicke model of superradiance, which assumes that atoms have only two energetic levels and that these interact with only one mode of the electromagnetic field. The phase transition occurs when the strength of the interaction between the atoms and the field is greater than the energy of the non-interacting part of the system. (This is similar to the case of superconductivity in ferromagnetism, which leads to the dynamic interaction between ferromagnetic atoms and the spontaneous ordering of excitations below the critical temperature.) The collective Lamb shift, relating to the system of atoms interacting with the vacuum fluctuations, becomes comparable to the energies of atoms alone, and the vacuum fluctuations cause the spontaneous self-excitation of matter. The transition can be readily understood by the use of the Holstein-Primakoff transformation applied to a two-level atom. As a result of this transformation, the atoms become Lorentz harmonic oscillators with frequencies equal to the difference between the energy levels. The whole system then simplifies to a system of interacting harmonic oscillators of atoms, and the field known as Hopfield dielectric which further predicts in the normal state polarons for photons or polaritons. If the interaction with the field is so strong that the system collapses in the harmonic approximation and complex polariton frequencies (soft modes) appear, then the physical system with nonlinear terms of the higher order becomes the system with the Mexican hat-like potential, and will undergo ferroelectric-like phase transition. In this model, the system is mathematically equivalent for one mode of excitation to the Trojan wave packet, when the circularly polarized field intensity corresponds to the electromagnetic coupling constant. Above the critical value, it changes to the unstable motion of the ionization. The superradiant phase transition was the subject of a wide discussion as to whether or not it is only a result of the simplified model of the matter-field interaction; and if it can occur for the real physical parameters of physical systems (a no-go theorem). However, both the original derivation and the later corrections leading to nonexistence of the transition – due to Thomas–Reiche–Kuhn sum rule canceling for the harmonic oscillator the needed inequality to impossible negativity of the interaction – were based on the assumption that the quantum field operators are commuting numbers, and the atoms do not interact with the static Coulomb forces. This is generally not true like in case of Bohr–van Leeuwen theorem and the classical non-existence of Landau diamagnetism. The negating results were also the consequence of using the simple Quantum Optics models of the electromagnetic field-matter interaction but not the more realistic Condenced Matter models like for example the superconductivity model of the BCS but with the phonons replaced by photons to first obtain the collective polaritons. 
The return of the transition basically occurs because the inter-atom dipole-dipole or generally the electron-electron Coulomb interactions are never negligible in the condensed and even more in the superradiant matter density regime and the Power-Zienau unitary transformation eliminating the quantum vector potential in the minimum-coupling Hamiltonian transforms the Hamiltonian exactly to the form used when it was discovered and without the square of the vector potential which was later claimed to prevent it. Alternatively within the full quantum mechanics including the electromagnetic field the generalized Bohr–van Leeuwen theorem does not work and the electromagnetic interactions cannot be eliminated while they only change the vector potential coupling to the electric field coupling and alter the effective electrostatic interactions. It can be observed in model systems like Bose–Einstein condensates and artificial atoms. Theory Criticality of linearized Jaynes-Cummings model A superradiant phase transition is formally predicted by the critical behavior of the resonant Jaynes-Cummings model, describing the interaction of only one atom with one mode of the electromagnetic field. Starting from the exact Hamiltonian of the Jaynes-Cummings model at resonance Applying the Holstein-Primakoff transformation for two spin levels, replacing the spin raising and lowering operators by those for the harmonic oscillators one gets the Hamiltonian of two coupled harmonic-oscillators: which readily can be diagonalized. Postulating its normal form where one gets the eigenvalue equation with the solutions The system collapses when one of the frequencies becomes imaginary, i.e. when or when the atom-field coupling is stronger than the frequency of the mode and atom oscillators. While there are physically higher terms in the true system, the system in this regime will therefore undergo the phase transition. Criticality of Jaynes-Cummings model The simplified Hamiltonian of the Jaynes-Cummings model, neglecting the counter-rotating terms, is and the energies for the case of zero detuning are where is the Rabi frequency. One can approximately calculate the canonical partition function , where the discrete sum was replaced by the integral. The normal approach is that the latter integral is calculated by the Gaussian approximation around the maximum of the exponent: This leads to the critical equation This has the solution only if which means that the normal, and the superradiant phase, exist only if the field-atom coupling is significantly stronger than the energy difference between the atom levels. When the condition is fulfilled, the equation gives the solution for the order parameter depending on the inverse of the temperature , which means non-vanishing ordered field mode. Similar considerations can be done in true thermodynamic limit of the infinite number of atoms. Instability of the classical electrostatic model The better insight on the nature of the superradiant phase transition as well on the physical value of the critical parameter which must be exceeded in order for the transition to occur may be obtained by studying the classical stability of the system of the charged classical harmonic oscillators in the 3D space interacting only with the electrostatic repulsive forces for example between electrons in the locally harmonic oscillator potential. Despite the original model of the superradiance the quantum electromagnetic field is totally neglected here. 
The oscillators may be assumed to be placed for example on the cubic lattice with the lattice constant in the analogy to the crystal system of the condensed matter. The worse scenario of the defect of the absence of the two out-of-the-plane motion-stabilizing electrons from the 6-th nearest neighbors of a chosen electron is assumed while the four nearest electrons are first assumed to be rigid in space and producing the anti-harmonic potential in the direction perpendicular to the plane of the all five electrons. The condition of the instability of motion of the chosen electron is that the net potential being the superposition of the harmonic oscillator potential and the quadratically expanded Coulomb potential from the four electrons is negative i.e. or Making it artificially quantum by multiplying the numerator and the denominator of the fraction by the one obtains the condition where is the square of the dipole transition strength between the ground state and the first excited state of the quantum harmonic oscillator, is the energy gap between consecutive levels and it is also noticed that is the spatial density of the oscillators. The condition is almost identical to this obtained in the original discovery of the superradiant phase transition when replacing the harmonic oscillators with two level atoms with the same distance between the energy levels, dipole transition strength, and the density which means that it occurs in the regime when the Coulomb interactions between electrons dominate over locally harmonic oscillatory influence of the atoms. It that sense the free electron gas with is also purely superradiant. The critical inequality rewritten yet differently expresses the fact that superradiant phase transition occurs when the frequency of the binding atomic oscillators is lower than so called electron gas plasma frequency. References External links Old Warsaw School of "No-Go" of the Superradiant Phase Transition, the talk by the 2022 Wigner Medal recipient Iwo Bialynicki-Birula's former PhD student K. Rzążewski Quantum mechanics Phase transitions
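A numerical sketch of the soft-mode collapse in the linearized two-oscillator treatment described above. It assumes the quadrature form of the coupling, H ≈ ½(p_a² + ω²x_a²) + ½(p_b² + ω₀²x_b²) + 2λ√(ωω₀)·x_a·x_b, for which the lower normal-mode frequency squared becomes negative once λ exceeds √(ωω₀)/2; this coupling convention, and hence the factor of one half, is an assumption of the sketch rather than something fixed by the text above.

import numpy as np

def normal_mode_frequencies_sq(omega, omega0, lam):
    """Squared normal-mode frequencies of two linearly coupled harmonic oscillators.

    Potential-energy matrix in the (x_a, x_b) quadratures for the coupling
    2 * lam * sqrt(omega * omega0) * x_a * x_b (one common convention).
    """
    g = 2.0 * lam * np.sqrt(omega * omega0)
    V = np.array([[omega**2, g],
                  [g, omega0**2]])
    return np.linalg.eigvalsh(V)        # eigenvalues are the squared mode frequencies

omega = omega0 = 1.0
critical = np.sqrt(omega * omega0) / 2.0    # expected threshold for this convention
for lam in (0.3, 0.49, 0.51, 0.7):
    low, high = normal_mode_frequencies_sq(omega, omega0, lam)
    status = "stable" if low > 0 else "soft mode -> superradiant instability"
    print(f"lambda={lam:4.2f} (critical {critical:.2f}): lowest mode^2 = {low:+.3f}  {status}")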
Superradiant phase transition
[ "Physics", "Chemistry" ]
1,782
[ "Physical phenomena", "Phase transitions", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Critical phenomena", "Statistical mechanics", "Matter" ]
39,464,628
https://en.wikipedia.org/wiki/Evolution%20from%20Francis%20turbine%20to%20Kaplan%20turbine
The Francis turbine converts energy at high pressure heads, which are not always available, and hence a turbine was required that could convert the energy of low pressure heads, given that the quantity of water was large enough. Converting high pressure heads to power is comparatively easy, but doing so for low pressure heads is difficult. Therefore, an evolution took place that converted the Francis turbine into the Kaplan turbine, which generates power efficiently even at low pressure heads. Changes Turbines are sometimes differentiated on the basis of the type of inlet flow: whether the inlet velocity is in the axial direction, the radial direction, or a combination of both. The Francis turbine is a mixed hydraulic turbine (the inlet velocity has radial and tangential components), while the Kaplan turbine is an axial hydraulic turbine (the inlet velocity has only an axial component). The evolution consisted mainly of a change in the inlet flow. Nomenclature of a velocity triangle: A general velocity triangle consists of the following vectors: V: Absolute velocity of the fluid. U: Tangential (peripheral) velocity of the runner blade. Vr: Relative velocity of the fluid after contact with the rotor. Vw: Tangential component of V (absolute velocity), called the whirl velocity. Vf: Flow velocity (the axial component in the case of axial machines, the radial component in the case of radial machines). α: Angle made by V with the plane of the machine (usually the nozzle angle or the guide blade angle). β: Angle of the rotor blade, or angle made by the relative velocity with the tangential direction. Generally, the Kaplan turbine works on low pressure heads (H) and high flow rates (Q). This implies that the specific speed (Ns) at which a Kaplan turbine functions is high, as the specific speed is directly proportional to flow (Q) and inversely proportional to head (H). On the other hand, the Francis turbine works at low specific speeds, i.e., high pressure heads. In the figure, it can be seen that an increase in specific speed (or decrease in head) has the following consequences: A reduction in the inlet velocity V1. The flow velocity Vf1 at the inlet increases, and hence allows a larger amount of fluid to enter the turbine. The Vw component decreases as the design moves toward the Kaplan turbine; here in the figure, Vf represents the axial (Va) component. The flow at the inlet, in the figure, to all the runners except the Kaplan impeller is in the radial (Vf) and tangential (Vw) directions. β1 decreases as the evolution proceeds. However, the exit velocity is axial in the Kaplan runner, while it is radial in all other runners. Hence, these are the parameter changes that have to be incorporated in converting a Francis turbine to a Kaplan turbine. General differences between Francis and Kaplan turbines The efficiency of a Kaplan turbine is higher than that of a Francis turbine. A Kaplan turbine has a smaller cross-section and a lower rotational speed than a Francis turbine. In a Kaplan turbine, the water flows in axially and out axially, while in a Francis turbine it flows in radially and out axially. A Kaplan turbine has fewer runner blades than a Francis turbine because a Kaplan turbine's blades are twisted and cover a larger circumference. Friction losses in a Kaplan turbine are smaller. The shaft of a Francis turbine is usually vertical (in many of the early machines it was horizontal), whereas in a Kaplan turbine it is always vertical. A Francis turbine's specific speed is medium (60–300 RPM); a Kaplan turbine's specific speed is high (300–1000 RPM).
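A back-of-the-envelope illustration of why low head and large flow push the specific speed up. It uses the discharge-based definition Ns = N·√Q / H^(3/4) (one common dimensional form, with N in rpm, Q in m³/s and H in m); both the chosen definition and the sample numbers are assumptions for the sketch, not values from the text above.

def specific_speed(N_rpm, Q_m3s, H_m):
    """Discharge-based specific speed Ns = N * sqrt(Q) / H**0.75 (dimensional form)."""
    return N_rpm * Q_m3s ** 0.5 / H_m ** 0.75

# High head, modest flow: the Francis-type regime.
ns_francis = specific_speed(N_rpm=300, Q_m3s=20, H_m=150)
# Low head, large flow: the Kaplan-type regime.
ns_kaplan = specific_speed(N_rpm=150, Q_m3s=400, H_m=8)

print(f"high-head case  Ns ~ {ns_francis:6.1f}")
print(f"low-head  case  Ns ~ {ns_kaplan:6.1f}   (much larger -> axial/Kaplan runner)")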
See also Francis turbine Kaplan turbine Velocity triangle Turbine Three-dimensional losses and correlation in turbomachinery Notes References Water turbines Turbomachinery Hydraulic engineering
Evolution from Francis turbine to Kaplan turbine
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
763
[ "Hydrology", "Turbomachinery", "Chemical equipment", "Physical systems", "Hydraulics", "Civil engineering", "Mechanical engineering", "Hydraulic engineering" ]
39,466,042
https://en.wikipedia.org/wiki/Norman%20Bekkedahl
Norman Bekkedahl (1903–1986) was Deputy Chief of the Polymers Division at the Institute for Materials Research of the National Bureau of Standards. Bekkedahl received the 1967 Charles Goodyear Medal for his work with the application of thermodynamics to natural rubber, in particular the application of dilatometry to studying glass transition and crystallization of rubber. In 1995, he was inducted into the International Rubber Science Hall of Fame. Bekkedahl made one of the first investigations of the glass transition of rubber and wrote more than 40 technical articles on rubber. He studied chemical engineering at the University of Minnesota. He continued his studies at George Washington University and received his Ph.D. from the American University in Washington, DC. He worked at the American Sugar Beet Company, the U.S. Department of Agriculture, and the National Bureau of Standards (Polymers Division). References Polymer scientists and engineers 1903 births 1986 deaths
Norman Bekkedahl
[ "Chemistry", "Materials_science" ]
191
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
39,467,280
https://en.wikipedia.org/wiki/Howard%20Burton
Howard Burton is a filmmaker, an author and the creator of Ideas Roadshow, a multimedia initiative producing documentary films, books and podcasts. Ideas Roadshow was the recipient of the Educational Learning Resources Award at the London Book Fair's International Excellence Awards in 2018. Burton holds an M.A. in philosophy and a Ph.D. in theoretical physics, and was the founding executive director of Perimeter Institute for Theoretical Physics in Waterloo, Canada, from 1999–2007. He received a Distinguished Alumni Award from the University of Waterloo in 2007. His book First Principles: Building Perimeter Institute tells the history of the founding years of Perimeter Institute. References Living people Theoretical physicists Year of birth missing (living people)
Howard Burton
[ "Physics" ]
143
[ "Theoretical physics", "Theoretical physicists" ]
39,471,791
https://en.wikipedia.org/wiki/Rock%20mass%20plasticity
In geotechnical engineering, rock mass plasticity is the study of the response of rocks to loads beyond the elastic limit. Historically, conventional wisdom has it that rock is brittle and fails by fracture, while plasticity (irreversible deformation without fracture) is identified with ductile materials such as metals. In field-scale rock masses, structural discontinuities exist in the rock indicating that failure has taken place. Since the rock has not fallen apart, contrary to expectation of brittle behavior, clearly elasticity theory is not the last word. Theoretically, the concept of rock plasticity is based on soil plasticity which is different from metal plasticity. In metal plasticity, for example in steel, the size of a dislocation is sub-grain size while for soil it is the relative movement of microscopic grains. The theory of soil plasticity was developed in the 1960s at Rice University to provide for inelastic effects not observed in metals. Typical behaviors observed in rocks include strain softening, perfect plasticity, and work hardening. Application of continuum theory is possible in jointed rocks because of the continuity of tractions across joints even through displacements may be discontinuous. The difference between an aggregate with joints and a continuous solid is in the type of constitutive law and the values of constitutive parameters. Experimental evidence Experiments are usually carried out with the intention of characterizing the mechanical behavior of rock in terms of rock strength. The strength is the limit to elastic behavior and delineates the regions where plasticity theory is applicable. Laboratory tests for characterizing rock plasticity fall into four overlapping categories: confining pressure tests, pore pressure or effective stress tests, temperature-dependent tests, and strain rate-dependent tests. Plastic behavior has been observed in rocks using all these techniques since the early 1900s. The Boudinage experiments show that localized plasticity is observed in certain rock specimens that have failed in shear. Other examples of rock displaying plasticity can be seen in the work of Cheatham and Gnirk. Test using compression and tension show necking of rock specimens while tests using wedge penetration show lip formation. The tests carried out by Robertson show plasticity occurring at high confining pressures. Similar results are observable in the experimental work carried out by Handin and Hager, Paterson, and Mogi. From these results it appears that the transition from elastic to plastic behavior may also indicate the transition from softening to hardening. More evidence is presented by Robinson and Schwartz. It is observed that the higher the confining pressure, the greater the ductility observed. However, the strain to rupture remains roughly the same at around 1. The effect of temperature on rock plasticity has been explored by several teams of researchers. It is observed that the peak stress decreases with temperature. Extension tests (with confining pressure greater than the compressive stress) show that the intermediate principal stress as well as the strain rate has an effect on the strength. The experiments on the effect of strain rate by Serdengecti and Boozer show that increasing the strain rate makes rock stronger but also makes it appear more brittle. Thus dynamic loading may actually cause the strength of the rock to increase substantially. Increase in temperature appears to increase the rate effect in the plastic behavior of rocks. 
After these early explorations in the plastic behavior of rocks, a significant amount of research has been carried out on the subject, primarily by the petroleum industry. From the accumulated evidence, it is clear that rock does exhibit remarkable plasticity under certain conditions and the application of a plasticity theory to rock is appropriate. Governing equations The equations that govern the deformation of jointed rocks are the same as those used to describe the motion of a continuum: where is the mass density, is the material time derivative of , is the particle velocity, is the particle displacement, is the material time derivative of , is the Cauchy stress tensor, is the body force density, is the internal energy per unit mass, is the material time derivative of , is the heat flux vector, is an energy source per unit mass, is the location of the point in the deformed configuration, and t is the time. In addition to the balance equations, initial conditions, boundary conditions, and constitutive models are needed for a problem to be well-posed. For bodies with internal discontinuities such as jointed rock, the balance of linear momentum is more conveniently expressed in the integral form, also called the principle of virtual work: where represents the volume of the body and is its surface (including any internal discontinuities), is an admissible variation that satisfies the displacement (or velocity) boundary conditions, the divergence theorem has been used to eliminate derivatives of the stress tensor, and are surface tractions on the surfaces . The jump conditions across stationary internal stress discontinuities require that the tractions across these surfaces be continuous, i.e., where are the stresses in the sub-bodies , and is the normal to the surface of discontinuity. Constitutive relations For small strains, the kinematic quantity that is used to describe rock mechanics is the small strain tensor If temperature effects are ignored, four types of constitutive relations are typically used to describe small strain deformations of rocks. These relations encompass elastic, plastic, viscoelastic, and viscoplastic behavior and have the following forms: Elastic material: or . For an isotropic, linear elastic, material this relation takes the form or . The quantities are the Lamé parameters. Viscous fluid: For isotropic materials, or where is the shear viscosity and is the bulk viscosity. Nonlinear material: Isotropic nonlinear material relations take the form or . This type of relation is typically used to fit experimental data and may include inelastic behavior. Quasi-linear materials: Constitutive relations for these materials are typically expressed in rate form, e.g., or . A failure criterion or yield surface for the rock may then be expressed in the general form Typical constitutive relations for rocks assume that the deformation process is isothermal, the material is isotropic, quasi-linear, and homogenous and material properties do not depend upon position at the start of the deformation process, that there is no viscous effect and therefore no intrinsic time scale, that the failure criterion is rate-independent, and that there is no size effect. However, these assumptions are made only to simplify analysis and should be abandoned if necessary for a particular problem. Yield surfaces for rocks Design of mining and civil structures in rock typically involves a failure criterion that is cohesive-frictional. 
The failure criterion is used to determine whether a state of stress in the rock will lead to inelastic behavior, including brittle failure. For rocks under high hydrostatic stresses, brittle failure is preceded by plastic deformation and the failure criterion is used to determine the onset of plastic deformation. Typically, perfect plasticity is assumed beyond the yield point. However strain hardening and softening relations with nonlocal inelasticity and damage have also been used. Failure criteria and yield surfaces are also often augmented with a cap to avoid unphysical situations where extreme hydrostatic stress states do not lead to failure or plastic deformation. Two widely used yield surfaces/failure criteria for rocks are the Mohr-Coulomb model and the Drucker-Prager model. The Hoek–Brown failure criterion is also used, notwithstanding the serious consistency problem with the model. The defining feature of these models is that tensile failure is predicted at low stresses. On the other hand, as the stress state becomes increasingly compressive, failure and yield requires higher and higher values of stress. Plasticity theory The governing equations, constitutive models, and yield surfaces discussed above are not sufficient if we are to compute the stresses and displacements in a rock body that is undergoing plastic deformation. An additional kinematic assumption is needed, i.e., that the strain in the body can be decomposed additively (or multiplicatively in some cases) into an elastic part and a plastic part. The elastic part of the strain can be computed from a linear elastic constitutive model. However, determination of the plastic part of the strain requires a flow rule and a hardening model. Typical flow plasticity theories (for small deformation perfect plasticity or hardening plasticity) are developed on the basis on the following requirements: The rock has a linear elastic range. The rock has an elastic limit defined as the stress at which plastic deformation first takes place, i.e., . Beyond the elastic limit the stress state always remains on the yield surface, i.e., . Loading is defined as the situation under which increments of stress are greater than zero, i.e., . If loading takes the stress state to the plastic domain then the increment of plastic strain is always greater than zero, i.e., . Unloading is defined as the situation under which increments of stress are less than zero, i.e., . The material is elastic during unloading and no additional plastic strain is accumulated. The total strain is a linear combination of the elastic and plastic parts, i.e., . The plastic part cannot be recovered while the elastic part is fully recoverable. The work done of a loading-unloading cycle is positive or zero, i.e., . This is also called the Drucker stability postulate and eliminates the possibility of strain softening behavior. Three-dimensional plasticity The above requirements can be expressed in three dimensions as follows. Elasticity (Hooke's law). In the linear elastic regime the stresses and strains in the rock are related by where the stiffness matrix is constant. Elastic limit (Yield surface). The elastic limit is defined by a yield surface that does not depend on the plastic strain and has the form Beyond the elastic limit. For strain hardening rocks, the yield surface evolves with increasing plastic strain and the elastic limit changes. The evolving yield surface has the form Loading. 
It is not straightforward to translate the condition geology to three dimensions, particularly for rock plasticity which is dependent not only on the deviatoric stress but also on the mean stress. However, during loading and it is assumed that the direction of plastic strain is identical to the normal to the yield surface () and that , i.e., The above equation, when it is equal to zero, indicates a state of neutral loading where the stress state moves along the yield surface without changing the plastic strain. Unloading: A similar argument is made for unloading for which situation , the material is in the elastic domain, and Strain decomposition: The additive decomposition of the strain into elastic and plastic parts can be written as Stability postulate: The stability postulate is expressed as Flow rule In metal plasticity, the assumption that the plastic strain increment and deviatoric stress tensor have the same principal directions is encapsulated in a relation called the flow rule. Rock plasticity theories also use a similar concept except that the requirement of pressure-dependence of the yield surface requires a relaxation of the above assumption. Instead, it is typically assumed that the plastic strain increment and the normal to the pressure-dependent yield surface have the same direction, i.e., where is a hardening parameter. This form of the flow rule is called an associated flow rule and the assumption of co-directionality is called the normality condition. The function is also called a plastic potential. The above flow rule is easily justified for perfectly plastic deformations for which when , i.e., the yield surface remains constant under increasing plastic deformation. This implies that the increment of elastic strain is also zero, , because of Hooke's law. Therefore, Hence, both the normal to the yield surface and the plastic strain tensor are perpendicular to the stress tensor and must have the same direction. For a work hardening material, the yield surface can expand with increasing stress. We assume Drucker's second stability postulate which states that for an infinitesimal stress cycle this plastic work is positive, i.e., The above quantity is equal to zero for purely elastic cycles. Examination of the work done over a cycle of plastic loading-unloading can be used to justify the validity of the associated flow rule. Consistency condition The Prager consistency condition is needed to close the set of constitutive equations and to eliminate the unknown parameter from the system of equations. The consistency condition states that at yield because , and hence Notes References External links Microstructures and deformation mechanisms Continuum mechanics Plasticity (physics) Rock mechanics
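The Drucker-Prager surface mentioned above can be evaluated directly from a stress tensor to decide whether a trial state lies inside the elastic domain. The sketch below uses the form f = sqrt(J2) + alpha*I1 - k with a tension-positive sign convention; the constants alpha and k and the stress state are illustrative assumptions, and published formulations differ in sign and normalization conventions.

```python
import numpy as np

def drucker_prager(sigma, alpha, k):
    """Drucker-Prager yield function f = sqrt(J2) + alpha*I1 - k
    (tension-positive convention); f < 0 -> elastic, f >= 0 -> yield."""
    I1 = np.trace(sigma)
    s = sigma - I1 / 3.0 * np.eye(3)      # deviatoric part of the stress
    J2 = 0.5 * np.tensordot(s, s)         # second deviatoric invariant
    return np.sqrt(J2) + alpha * I1 - k

# Illustrative (assumed) material constants, stresses in MPa
alpha, k = 0.2, 5.0

# Triaxial compressive state: 10 MPa confinement, 40 MPa axial (compression negative)
sigma = np.diag([-10.0, -10.0, -40.0])
f = drucker_prager(sigma, alpha, k)
print("f =", round(f, 3), "->", "plastic/yield" if f >= 0 else "elastic")
```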
Rock mass plasticity
[ "Physics", "Materials_science" ]
2,606
[ "Deformation (mechanics)", "Classical mechanics", "Plasticity (physics)", "Continuum mechanics" ]
48,524,515
https://en.wikipedia.org/wiki/Strictly-Correlated-Electrons%20density%20functional%20theory
The Strictly-Correlated-Electrons (SCE) density functional theory (SCE DFT) approach, originally proposed by Michael Seidl, is a formulation of density functional theory, alternative to the widely used Kohn-Sham DFT, especially aimed at the study of strongly-correlated systems. The essential difference between the two approaches is the choice of the auxiliary system (having the same density as the real, physical one). In Kohn-Sham DFT this system is composed by non-interacting electrons, for which the kinetic energy can be calculated exactly and the interaction term has to be approximated. In SCE DFT, instead, the starting point is totally the opposite one: the auxiliary system has infinite electronic correlation and zero kinetic energy. The Strictly-Correlated-Electron reference system To understand how the SCE system is constructed, it is useful to first think in terms of a simple example. Consider a collection of identical classical charges (with repulsive Coulomb interaction) confined in some container with a given shape. If let alone, the charges will distribute themselves within the container until they reach the spatial configuration that minimizes their interaction energy (in equilibrium, their kinetic energy is zero). Of course, the equilibrium position of the charges will depend on the shape of the container. Suppose now that in this classical system one of the charges, which we can label as number “1”, is pinned at some arbitrary position inside the container. Clearly, the equilibrium position of the other charges will now not only depend on the shape of the container, but also on the position of the pinned charge. Thus, for a given confining geometry, one can write the position of the -th particle , , as a function of : . In the SCE system, as in the classical example described above, the position of a reference electron determines the position of the remaining ones. The analogue role of the confining container is now played by the condition that the density at each point must be the same as that of the real system, : the electrons will always try to be as far apart from each other as possible, in order to minimize their repulsion, but always restricted by this condition. The positions are called co-motion functions and play a fundamental role in the SCE formalism, analogue to the Kohn-Sham single-particle orbitals in Kohn-Sham DFT. Calculation of the co-motion functions and interaction energy of the SCE system For a given density , the probability of finding one electron at a certain position is the same as that of finding the -th electron at , or, equivalently, . The co-motion functions can be obtained from the integration of this equation. An analytical solution exists for 1D systems, but not for the general case. The interaction energy of the SCE system for a given density can be exactly calculated in terms of the co-motion functions as . Notice that this is analogous to the Kohn-Sham approach, where the non-interacting kinetic energy is expressed in terms of the Kohn-Sham single-particle orbitals. A very important property of the SCE system is the following one: since the position of one particle determines the position of the remaining ones, the total coulomb repulsion felt by a particle at a point becomes a function of only itself. This force can then be written as minus the gradient of some one-particle potential : . At the same time, it can be shown that the potential satisfies the relation . 
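The one-dimensional case mentioned above, where the co-motion functions follow from integrating the density, can be reproduced numerically. The sketch below implements the usual 1D construction in terms of the cumulant Ne(x) = integral of rho up to x, namely Ne(f_i(x)) = Ne(x) + i - 1 taken modulo N, for an assumed Gaussian-shaped density with N = 2 electrons; the density profile and grid are illustrative choices, not data from the article.

```python
import numpy as np

# Illustrative 1D density for N = 2 electrons (an assumed Gaussian profile)
N = 2
x = np.linspace(-8.0, 8.0, 4001)
rho = np.exp(-0.5 * x**2)

# Cumulative integral of rho (trapezoid rule), then normalize so Ne(+inf) = N
Ne = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))))
rho *= N / Ne[-1]
Ne  *= N / Ne[-1]

def comotion(i, xs):
    """i-th co-motion function from the standard 1D SCE construction:
    Ne(f_i(x)) = Ne(x) + i - 1  (modulo N), inverted by interpolation."""
    y = np.interp(xs, x, Ne) + (i - 1)
    y = np.where(y <= N, y, y - N)      # wrap around when exceeding N
    return np.interp(y, Ne, x)          # numerical inverse of the cumulant

# With electron 1 pinned at x0, electron 2 sits at f_2(x0)
x0 = -0.5
print("f_2(%.1f) = %.3f" % (x0, comotion(2, np.array([x0]))[0]))
```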
A promising route towards the application of the SCE approach to systems with general symmetry is the mass-transportation-theory reformulation of the approach. This is based on the analogies between the SCE problem and the dual Kantorovich problem. The SCE wave function is also very useful to set rigorous bounds for the constant appearing in the Lieb-Oxford inequality. Combining the strictly-correlated-electron and the Kohn-Sham approaches The one-body potential can be used to approximate the Hartree-exchange-correlation (Hxc) potential of the Kohn-Sham DFT approach. Indeed, one can see the analogy between the expression relating the functional derivative of and and the well-known one of Kohn-Sham DFT , which relates the Hartree-exchange-correlation (Hxc) functional and the corresponding potential. The approximation (which becomes exact in the limit of infinitely strong interaction) corresponds to writing the Hohenberg-Kohn functional as , where is the non-interacting kinetic energy. One has therefore and this leads to the Kohn-Sham equations , which can be solved self-consistently. Since the potential is constructed from the exact properties of the SCE system, it is able to capture the effects of the strongly-correlated regime, as it has been recently shown in the first applications of this "KS-SCE DFT" approach to simple model systems. In particular, the method has allowed to observe Wigner localization in strongly-correlated electronic systems without introducing any artificial symmetry breaking. Other related density functional methods in the strongly correlated system The fractional quantum Hall effect(FQHE) is a strongly correlated system of general interest in the field of condensed matter. Previous DFT applications maps the FQHE to a reference system of non-interacting electrons, but fail to capture many interesting features of FQHE. The progress has been recently made to map the FQHE instead to a reference system of non-interacting composite fermions, which are emergent particles in FQHE. When a non-local exchange-correlation is incorporated to take care of the long-range gauge interaction between composite fermions, this DFT method successfully captures not only configurations with nonuniform densities but also topological properties such as fractional charge and fractional braid statistics for the quasiparticles excitations. This is a non-trivial example of how the DFT method can be applied to a strongly correlated FQHE system and provide numerical result comparable to those exact-diagonalization results. It opens a new line to attack the problem of FQHE through the popular DFT method. References Density functional theory
Strictly-Correlated-Electrons density functional theory
[ "Physics", "Chemistry" ]
1,252
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics" ]
48,532,100
https://en.wikipedia.org/wiki/Laser%20schlieren%20deflectometry
Laser schlieren deflectometry (LSD) is a method for a high-speed measurement of the gas temperature in microscopic dimensions, in particular for temperature peaks under dynamic conditions at atmospheric pressure. The principle of LSD is derived from schlieren photography: a narrow laser beam is used to scan an area in a gas where changes in properties are associated with characteristic changes of refractive index. Laser schlieren deflectometry is claimed to overcome limitations of other methods regarding temporal and spatial resolution. The theory of the method is analogous to the scattering experiment of Ernest Rutherford from 1911. However, instead of alpha particles scattered by gold atoms, here an optical ray is deflected by hot spots with unknown temperature. A general equation of LSD describes the dependence of the measured maximum deflection of the ray δ1 on the local maximum of the neutral gas temperature in the hot spot T1: where T0 is ambient temperature and δ0 is a calibration constant depending on the configuration of the experiment. Laser schlieren deflectometry has been used for investigation of the temperature dynamics, heat transfer and energy balance in a miniaturized kind of atmospheric-pressure plasma. See also Moire deflectometry Schlieren Schlieren photography Shadowgraph References Plasma diagnostics Thermodynamics Fluid dynamics
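The specific calibration relation between the deflection δ1, the constant δ0 and the temperatures T0 and T1 is not reproduced here, but the reason a deflection encodes temperature can be sketched from generic schlieren physics: at constant pressure the ideal-gas law makes the gas density scale as 1/T, and the Gladstone-Dale relation makes the refractivity (n - 1) proportional to density, so a hot spot has reduced refractivity and bends a probing ray. The numbers in the following sketch are illustrative assumptions, not values from the article.

```python
# Generic schlieren background, not the article's specific calibration equation.
# At constant pressure: (n(T) - 1) = (n0 - 1) * T0 / T  (ideal gas + Gladstone-Dale).

n0_minus_1 = 2.7e-4          # approximate refractivity of air at ambient conditions
T0, T1     = 300.0, 1200.0   # ambient and assumed hot-spot peak temperature, K
L, w       = 1.0e-3, 0.5e-3  # assumed path length through and width of the hot spot, m

n1_minus_1 = n0_minus_1 * T0 / T1
delta_n    = n0_minus_1 - n1_minus_1
print("refractivity contrast of the hot spot:", delta_n)

# Order-of-magnitude deflection angle ~ (index contrast) * (path length / transverse scale)
print("rough deflection angle estimate (rad):", delta_n * L / w)
```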
Laser schlieren deflectometry
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
281
[ "Plasma physics", "Dynamical systems", "Chemical engineering", "Measuring instruments", "Plasma diagnostics", "Thermodynamics", "Piping", "Fluid dynamics" ]
57,109,223
https://en.wikipedia.org/wiki/Bjerknes%20force
Bjerknes forces are translational forces on bubbles in a sound wave. The phenomenon is a type of acoustic radiation force. Primary Bjerknes forces are caused by an external sound field; secondary Bjerknes forces are attractive or repulsive forces between pairs of bubbles in the same sound field caused by the pressure field generated by each bubble volume's oscillations. They were first described by Vilhelm Bjerknes in his 1906 Fields of Force. Hydrodynamics – electromagnetism analogy In Fields of Force Bjerknes lay out geometrical and dynamical analogies between the Maxwell's theory of electromagnetism and hydrodynamics. In the light of these analogies the Bjerknes forces are being predicted. Principle of kinematic buoyancy Bjerknes writes:"Any body which participates in the translatory motion of a fluid mass is subject to a kinematic buoyancy equal to the product of the acceleration of the translatory motion multiplied by the mass of the water displaced by the body"This principle is analogous to Archimedes' principle. Based on this principle the force acting on a particle of volume is . Where is the fluid velocity and is the fluid density. Using conservation of momentum for incompressible non-viscous fluid one can find that to first order: , Concluding that . Charge and oscillating particles Bjerknes realized that the velocity field generated by an expanding particle in an incompressible fluid has the same geometrical structure as the electric field generated by a positively charged particle, and that the same applies for contracting particle and a negatively charged particle. In the case of an oscillating motion, Bjerknes argued that two particles that oscillate in phase generate a velocity field that is geometrically equivalent to the electric field generated by two particles with the same charge, whereas two particles that oscillate in an opposite phase will generate a velocity field that is geometrically equivalent to the electric field generated by particles with an opposite sign. Bjerknes then writes:"Between Bodies pulsating in the same phase there is an apparent attraction; between bodies pulsating in the opposite phase there is an apparent repulsion, the force being proportional to the product of the two intensities of pulsating, and proportional to the inverse square of the distance." This result is counter to our intuition, as it demonstrates that bodies oscillating in phase exert an attractive force on each other, despite creating a field akin to that of identically charged particles. This result was described by Bjerknes as "Astonishing". Primary Bjerknes force The force on a small particle in a sound wave is given by: where V is the volume of the particle, and P is the acoustic pressure gradient on the bubble. Assuming a sinusoidal standing wave, the time-averaged pressure gradient over a single acoustic cycle is zero, meaning a solid particle (with fixed volume) experiences no net force. However, because a bubble is compressible, the oscillating pressure field also causes its volume to change; for spherical bubbles this can be described by the Rayleigh–Plesset equation. This means the time-averaged product of the bubble volume and the pressure gradient can be non-zero over an acoustic cycle. Unlike acoustic radiation forces on incompressible particles, net forces can be generated in the absence of attenuation or reflection of the sound wave. The sign of the force will depend on the relative phase between the pressure field and the volume oscillations. 
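A minimal numerical sketch of that statement: for a one-dimensional standing wave P = p_a cos(kx) cos(wt) and a linearized volume oscillation lagging the pressure by a phase phi, the time-averaged force F = -<V dP/dx> changes sign as phi moves from 0 to pi. All amplitudes below are assumed values chosen for illustration.

```python
import numpy as np

p_a   = 1.0e4                      # acoustic pressure amplitude, Pa (assumed)
lam   = 0.05                       # wavelength, m
k     = 2 * np.pi / lam
omega = 2 * np.pi * 30e3           # 30 kHz driving
R0    = 50e-6                      # bubble rest radius, m
V0    = 4.0 / 3.0 * np.pi * R0**3
eps   = 0.1                        # relative volume-oscillation amplitude (assumed)
xb    = 0.01                       # bubble position between node and antinode, m

t    = np.linspace(0.0, 2 * np.pi / omega, 2000, endpoint=False)   # one cycle
dPdx = -p_a * k * np.sin(k * xb) * np.cos(omega * t)               # gradient of P

for phi in (0.0, np.pi):           # volume in phase / in antiphase with pressure
    V = V0 * (1.0 + eps * np.cos(omega * t - phi))
    F = -np.mean(V * dPdx)         # time-averaged primary Bjerknes force, N
    print(f"phase lag {phi:.2f} rad -> F = {F:+.2e} N")
```

Analytically the same average reduces to (1/2) V0 eps p_a k sin(kx) cos(phi), which is what the sketch reproduces: the force flips sign between the two phase choices.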
According to the theory of forced harmonic oscillator the relative phase will depend on the difference between the bubble resonant frequency and the acoustic driving frequency. Bubble focusing From Rayleigh–Plesset equation one can derive the bubble resonant frequency: Where is the fluid density, is the rest radius of the bubble, is the polytropic index, is the ambient pressure, is the vapor pressure and is the surface tension constant. Bubbles with resonance frequency above the acoustic driving frequency travel up the pressure gradient, while those with a lower resonance frequency travel down the pressure gradient. The dependence of the resonant frequency () on the rest radius of the bubble predicts that for standing waves, there is a critical radius that depends on the driving frequency. Small bubbles () accumulate at pressure antinodes, whereas large bubbles () accumulate at pressure nodes. References Acoustics Fluid dynamics External links
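As an illustration of the bubble-focusing rule described above, the sketch below uses the simplified Minnaert form of the resonance frequency, f0 = (1/(2*pi*R0)) * sqrt(3*kappa*p0/rho), which neglects surface tension and vapour pressure (an assumption of this sketch), to estimate the critical radius for a 30 kHz field and to classify two bubble sizes.

```python
import numpy as np

rho   = 998.0      # water density, kg/m^3
p0    = 101325.0   # ambient pressure, Pa
kappa = 1.4        # polytropic index (adiabatic air)

def minnaert_f0(R0):
    """Minnaert resonance frequency (surface tension and vapour pressure neglected)."""
    return np.sqrt(3 * kappa * p0 / rho) / (2 * np.pi * R0)

f_drive = 30e3                                            # driving frequency, Hz
R_crit = np.sqrt(3 * kappa * p0 / rho) / (2 * np.pi * f_drive)
print(f"critical (resonant) radius at 30 kHz: {R_crit*1e6:.0f} um")

for R0 in (20e-6, 300e-6):
    where = "antinode" if minnaert_f0(R0) > f_drive else "node"
    print(f"R0 = {R0*1e6:6.1f} um, f0 = {minnaert_f0(R0)/1e3:7.1f} kHz "
          f"-> collects at pressure {where}")
```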
Bjerknes force
[ "Physics", "Chemistry", "Engineering" ]
924
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Piping", "Fluid dynamics" ]
57,109,910
https://en.wikipedia.org/wiki/Natural%20resources%20engineering
Natural Resources Engineering, the sixth Abet accredited environmental engineering program in the United States, is a subset of environmental engineering that applies various branches of science in order to create new technology that aims to protect, maintain, and establish sustainable natural resources. Specifically, natural resources engineers are concerned with applying engineering concepts and solutions to prevalent environmental issues. Common natural resources this discipline of engineering works closely with include both living resources such as plants and animals as well as non-living resources such as renewable energy, land, soils, and water. Natural resource engineering also involves researching and evaluating natural and societal forces. The hydrological cycle is the main component of natural forces and the desires of other people attribute to societal forces. Some historical examples of applications of natural resources engineering include the Roman aqueducts and the Hoover Dam. Natural resource engineering degrees require a basic understanding of core engineering classes including calculus, physics, chemistry, and engineering mechanics, as well as additional courses with a stronger focus on applications of natural resources in environmental systems. These specific courses include soil and water engineering, modeling of biological and physical systems, properties of biological materials, and systems optimization. The overall purpose of natural resource engineering is mainly categorized as either resource development, environmental management or both. Natural resource engineers often work in a vast variety of environments ranging from urban to rural. Most natural resource engineers can be found working for groups who strive to solve current and future environmental issues such as environmental consulting firms and government agencies. History Natural resources engineering has always existed as an extension of biological engineering, but demand for such practices continue to increase along with increasing urbanization. The development of basic farming techniques, irrigation, and basic wells were a significant step in natural resources engineering for the Human race. Important historical examples of natural resources engineering include the Roman aqueducts and the Hoover Dam. Natural resource engineering is of vital importance in developing regions to address issues such as access to clean drinking water as well as sanitation and sustainable food production. In 1981 Environmental Resource Engineering became the 6th Abet Accredited environmental engineering program in the U.S. Natural resources engineering will be an important factor in how the natural environment will respond to rising pressure on environmental and agricultural resources. Concepts and areas of research and development The discipline of Natural Resource engineering specifically concentrates on natural resources. Natural resources are "industrial materials and capacities (such as mineral deposits and water power) supplied by nature" and sometimes legally are classified by their ability to be used by humans to meet their demands. Natural resources can be both living and non-living natural elements and include fossil fuels, plants, animals, minerals, sediment, and bodies of water. 
Areas of research and development in natural resources engineering concerning the hydro-logical cycle include: erosion control, flood control, water quality renovation and management, irrigation, drainage, bio-remediation, air quality, watershed-stream assessment, and ecological engineering. This discipline of engineering also involves investigating different natural and societal forces on the environment. The main natural force researched by natural resources engineers is the hydro-logical cycle. This cycle is concerned with how water transitions through the environment through the processes of evaporation, condensation, precipitation, and transpiration. This cycle is a concern when looking at prevalent environmental issues on the earth, and therefore is a major concern for natural resource engineers. The main societal force that concerns natural resource engineers is the exploitation of natural resources by humans. This force concerns natural resource engineers because it threatens to deplete or harm many sources of natural resources. With this concentration on natural resources and natural and societal impact, natural resource engineers are constantly searching for ways to apply engineering concepts to create developments that aim to protect, maintain, and establish sustainable sources of natural resources. Some current areas of research and developments include: finding ways to maximize the utilization of natural resources in fuel with minimum waste, developing infrastructure and equipment with the intent to provide protection for the overall environment and sources of natural resources, finding solutions to current environmental issues that directly impacted sources of natural resources such as soil erosion, sediment loss, flooding, and pollution, seeking efficient ways to manage natural resources so they will not be depleted, and finding ways to conserve and allocate resources efficiently as the population increases dramatically. Courses To obtain a degree in natural resource engineering, a solid engineering background is required, as well as specific technical knowledge specific to natural resources and their role in our environment. Most degree programs within this specific discipline are partnered within larger disciplines of engineering such as environmental engineering, biological engineering, or agricultural engineering. Standard engineering courses Mathematics (Calculus, differential equations, statistics) Physics Chemistry Engineering Mechanics (Statics, Dynamics, Solids Mechanics) Fluid Mechanics Thermodynamics Natural resources engineering specific course topics Materials, instrumentation, and measurement classes specific to biological systems Systems optimization Modeling and management of biological and physical systems Soil, water, conservation, and nutrient management engineering Careers With a degree in natural resources engineering, there are various different industries that one could pursue a career in. Some of these industries include federal, state, and local government agencies(such as the Natural Resource Conservation Service), environmental consulting firms, agricultural and food processing industries, and various other industries and companies that focus on solving environmental issues. In the government sector, natural resource engineers usually find themselves working on projects that work to manage government owned and operated natural resources and help solve environmental issues that impact these resources. 
Within an environmental consulting firm, a natural resource engineer may find themselves running calculations and making predictions about different ways to utilize natural resources to maximize their efficiency. Within different processing industries, natural resource engineers may find themselves working on waste management efficiency and natural resource processing design. Currently, the demand for natural resources engineers is greater than the supply of graduates and ranges locally to globally. Specific careers in natural resources engineering Biomass EngineerEnvironmental ScientistHydrology EngineerMarine ScientistSoil Scientist Ag-Aqua Engineer Agricultural Engineer Chemist Biochemist Genetic Engineer References External links Natural Resource Engineering Option. (n.d.). Retrieved March 31, 2018, from http://abe.psu.edu/majors/be/requirements/nre Environmental Resources Engineering. (n.d.). Retrieved March 31, 2018, from http://engineering.humboldt.edu/academics/history Environmental engineering
Natural resources engineering
[ "Chemistry", "Engineering" ]
1,256
[ "Chemical engineering", "Civil engineering", "Environmental engineering" ]
57,111,857
https://en.wikipedia.org/wiki/Adisa%20Azapagic
Adisa Azapagić (born 10 April 1961) is a Bosnian chemical engineer and academic. She has served as Professor of Sustainable Chemical Engineering at the University of Manchester since 2006. Early life and education Azapagic was born in 1961 in Tuzla, Bosnia and Herzegovina. She attended the University of Tuzla, and graduated in 1984 with a bachelor's degree in chemical engineering. She completed her doctoral studies at the University of Surrey, and earned her PhD on Environmental System Analysis using Life-cycle assessmentin 1996. Research and career Azapagic remained at the University of Surrey for thirteen years before moving to the University of Manchester. She leads the Sustainable Industrial Systems research group at the University of Manchester. She runs several industry collaborations, including projects with Procter & Gamble, Kraft Foods, Whirlpool Corporation. In 2015 she won the University of Manchester award for Outstanding Benefit to Society. Azapagic developed software to calculate carbon footprint at the University of Manchester (CCaLC). Her research interests lie in engineering for sustainable development, which includes sustainable technology, life cycle assessment and carbon footprinting. In 2018 she demonstrated that the UK's chocolate industry generates the same amount of greenhouse gases as Malta. Azapagic is the founding editor-in-chief of Elsevier's Sustainable Production and Consumption. She has written three books, looking at sustainable development and polymers. Awards and honours Azapagic was elected a Fellow of the Royal Academy of Engineering (FREng) in 2013. She was appointed Member of the Order of the British Empire (MBE) in the 2020 New Year Honours for services to sustainability and carbon footprinting. She was awarded an honorary doctoral degree from Gheorghe Asachi Technical University of Iași. She is part of the all-party manufacturing group. She is a member of the American Institute of Chemical Engineers. In 2010 she was awarded the Institution of Chemical Engineers prize for Outstanding Achievements in Chemical and Process Engineering. She won the GlaxoSmithKline Innovation prize in 2011. References British chemical engineers Women chemical engineers Fellows of the Institution of Chemical Engineers Female fellows of the Royal Academy of Engineering Fellows of the Royal Academy of Engineering Fellows of the Royal Society of Chemistry Alumni of the University of Surrey 1961 births Living people Members of the Order of the British Empire Bosnia and Herzegovina emigrants to the United Kingdom Naturalised citizens of the United Kingdom People from Tuzla
Adisa Azapagic
[ "Chemistry" ]
486
[ "Women chemical engineers", "Chemical engineers" ]
57,116,113
https://en.wikipedia.org/wiki/Nicolas%20Moussiopoulos
Nicolas Moussiopoulos (in Greek Νικόλαος Μουσιόπουλος, born January 1, 1956, in Athens) is a Greek engineer and university professor at the Aristotle University of Thessaloniki. His research interests are in the field of Environmental Engineering. He received the Gerhard Hess Award of the German Research Association, the Heinrich Hertz Award (1990) and Aristotle University's Excellence Prize (2008). Biography Moussiopoulos studied mechanical engineering at the Universität Karlsruhe (now Karlsruhe Institute of Technology, KIT), from 1973 to 1978 and, prior to graduating, had a research stay at the Von Karman Institute for Fluid Dynamics in Sint-Genesius-Rode, Belgium. After completing his doctoral studies in 1982 with a focus on transport phenomena, he started lecturing at the Universität Karlsruhe, where he led a research group that developed mathematical model systems to describe air pollutant dispersion and transformation. From 1986 to 1987 he also worked as a lecturer at the Gesamthochschule of Kassel. After the completion of his postdoctoral lecture qualification (“Habilitation”), he was appointed Full Professor at the School of Mechanical Engineering of the Aristotle University of Thessaloniki. Since 1990 he is also the head of this university's Sustainability Engineering Laboratory, formerly Laboratory of Heat Transfer and Environmental Engineering. In addition, since 1996 he is an honorary professor at KIT's School of Mechanical Engineering. In the periods 1997-1999 and 2003–2007, he chaired Aristotle University's School of Mechanical Engineering. From September 2006 until August 2010 he was the dean of the university's Faculty of Engineering. From October 2010 until March 2016 he served as the vice president of the International Hellenic University and dean of its School of Economics & Business Administration (until 2013). In the periods 2014-2017 and 2019-2021 he was also the head of the Energy Department, School of Mechanical Engineering of the Aristotle University Thessaloniki. Moussiopoulos has consulted several Greek ministers, and represented Greece in numerous international committees. Since 2002 he is a member of the German National Academy of Sciences Leopoldina. In the same year he was awarded the Order of Merit of the Federal Republic of Germany. In 2012 the Royal Society appointed him associate editor of Philosophical Transactions A. In the period 2015-2018, Moussiopoulos was the General Secretary of the Hellenic Chapter of the Club of Rome. From June 2018 until April 2021 he was an elected member of the Scientific Council of the Hellenic Foundation for Research and Innovation, responsible for Engineering and Technology Sciences. August 2019 he was appointed consultant to the German Federal Ministry for Economic Cooperation and Development on waste management issues in Greece. Since October 2021 he is elected vice president of the Hellenic Solid Waste Management Association, responsible for international relations. References 1956 births Academic staff of the Aristotle University of Thessaloniki Engineers from Athens Living people Recipients of the Cross of the Order of Merit of the Federal Republic of Germany Environmental engineers
Nicolas Moussiopoulos
[ "Chemistry", "Engineering" ]
601
[ "Environmental engineers", "Environmental engineering" ]
57,116,400
https://en.wikipedia.org/wiki/Pyragas%20method
In the mathematics of chaotic dynamical systems, the Pyragas method stabilizes a periodic orbit by injecting an appropriate continuous controlling signal into the system; the intensity of this signal is nearly zero while the system evolves close to the desired periodic orbit and increases when the system drifts away from that orbit. Both the Pyragas and OGY (Ott, Grebogi and Yorke) methods belong to a general class of methods called "closed loop" or "feedback" methods, which can be applied using only knowledge of the system obtained by observing its behavior as a whole over a suitable period of time. The method was proposed by the Lithuanian physicist Kęstutis Pyragas. References External links Kęstutis Pyragas homepage Chaos theory Nonlinear systems
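The controlling signal in delayed-feedback control of this kind is conventionally written F(t) = K[y(t - tau) - y(t)], which vanishes identically once the system follows an orbit of period tau. The sketch below applies such a term to the second equation of the Rössler system using simple Euler integration with a history buffer; the gain K and delay tau are illustrative values rather than canonical parameters (tau should match the period of the targeted orbit and K generally needs tuning within a finite stability window).

```python
import numpy as np

a, b, c = 0.2, 0.2, 5.7        # Roessler parameters
K, tau  = 0.2, 5.88            # assumed feedback gain and delay
dt      = 1e-3
steps   = int(400 / dt)
delay   = int(round(tau / dt))

x, y, z = 1.0, 1.0, 0.0
y_hist  = np.full(delay, y)    # circular buffer holding y over the last tau
ctrl    = np.zeros(steps)      # record |F| to check that it decays once locked

for n in range(steps):
    y_delayed = y_hist[n % delay]                          # y(t - tau)
    F = K * (y_delayed - y) if n * dt > 100.0 else 0.0     # control on after t = 100
    y_hist[n % delay] = y                                  # store current y for later
    dx, dy, dz = -y - z, x + a * y + F, b + z * (x - c)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    ctrl[n] = abs(F)

print("mean |F| in first 20 s of control:", ctrl[int(100/dt):int(120/dt)].mean())
print("mean |F| in final 20 s           :", ctrl[-int(20/dt):].mean())
```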
Pyragas method
[ "Mathematics" ]
156
[ "Nonlinear systems", "Dynamical systems" ]
57,117,782
https://en.wikipedia.org/wiki/Spot-tag
A Spot-tag is a 12-amino acid peptide tag recognized by a single-domain antibody (sdAb, or nanobody). Due to the small size of a Spot-tag (12 amino acids) and the robust Spot-nanobody (14.7 kD) that specifically binds to Spot-tagged proteins, Spot-tag can be used for multiple capture and detection applications: Immunoprecipitation, affinity purification, immunofluorescence, and super-resolution microscopy. Recombinant proteins can be engineered to express the Spot-tag. Spot-tag Sequence Amino acid sequence PDRVRAVSHWSS Codon optimized DNA sequence See also Protein tag References Amino acids Peptides Proteins
Spot-tag
[ "Chemistry" ]
150
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Amino acids", "Molecular biology", "Proteins", "Peptides" ]
57,121,312
https://en.wikipedia.org/wiki/Quantum%20foundations
Quantum foundations is a discipline of science that seeks to understand the most counter-intuitive aspects of quantum theory, reformulate it and even propose new generalizations thereof. Contrary to other physical theories, such as general relativity, the defining axioms of quantum theory are quite ad hoc, with no obvious physical intuition. While they lead to the right experimental predictions, they do not come with a mental picture of the world where they fit. There exist different approaches to resolve this conceptual gap: First, one can put quantum physics in contraposition with classical physics: by identifying scenarios, such as Bell experiments, where quantum theory radically deviates from classical predictions, one hopes to gain physical insights on the structure of quantum physics. Second, one can attempt to find a re-derivation of the quantum formalism in terms of operational axioms. Third, one can search for a full correspondence between the mathematical elements of the quantum framework and physical phenomena: any such correspondence is called an interpretation. Fourth, one can renounce quantum theory altogether and propose a different model of the world. Research in quantum foundations is structured along these roads. Non-classical features of quantum theory Quantum nonlocality Two or more separate parties conducting measurements over a quantum state can observe correlations which cannot be explained with any local hidden variable theory. Whether this should be regarded as proving that the physical world itself is "nonlocal" is a topic of debate, but the terminology of "quantum nonlocality" is commonplace. Nonlocality research efforts in quantum foundations focus on determining the exact limits that classical or quantum physics enforces on the correlations observed in a Bell experiment or more complex causal scenarios. This research program has so far provided a generalization of Bell's theorem that allows falsifying all classical theories with a superluminal, yet finite, hidden influence. Quantum contextuality Nonlocality can be understood as an instance of quantum contextuality. A situation is contextual when the value of an observable depends on the context in which it is measured (namely, on which other observables are being measured as well). The original definition of measurement contextuality can be extended to state preparations and even general physical transformations. Epistemic models for the quantum wave-function A physical property is epistemic when it represents our knowledge or beliefs on the value of a second, more fundamental feature. The probability of an event to occur is an example of an epistemic property. In contrast, a non-epistemic or ontic variable captures the notion of a “real” property of the system under consideration. There is an on-going debate on whether the wave-function represents the epistemic state of a yet to be discovered ontic variable or, on the contrary, it is a fundamental entity. Under some physical assumptions, the Pusey–Barrett–Rudolph (PBR) theorem demonstrates the inconsistency of quantum states as epistemic states, in the sense above. Note that, in QBism and Copenhagen-type views, quantum states are still regarded as epistemic, not with respect to some ontic variable, but to one's expectations about future experimental outcomes. The PBR theorem does not exclude such epistemic views on quantum states. 
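The gap between classical and quantum predictions in a Bell experiment can be made concrete with the CHSH quantity. The short sketch below computes the CHSH value for the two-qubit singlet state at the standard optimal measurement angles and compares it with the local-hidden-variable bound of 2; the quantum value comes out at the Tsirelson bound of 2*sqrt(2).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def E(theta_a, theta_b):
    """Correlator <(a.sigma) x (b.sigma)> in the singlet state."""
    A = np.kron(obs(theta_a), obs(theta_b))
    return np.real(singlet.conj() @ A @ singlet)

a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("quantum CHSH value |S| =", abs(S))   # ~2.828, the Tsirelson bound
print("classical bound        =", 2.0)
```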
Axiomatic reconstructions Some of the counter-intuitive aspects of quantum theory, as well as the difficulty to extend it, follow from the fact that its defining axioms lack a physical motivation. An active area of research in quantum foundations is therefore to find alternative formulations of quantum theory which rely on physically compelling principles. Those efforts come in two flavors, depending on the desired level of description of the theory: the so-called Generalized Probabilistic Theories approach and the Black boxes approach. The framework of generalized probabilistic theories Generalized Probabilistic Theories (GPTs) are a general framework to describe the operational features of arbitrary physical theories. Essentially, they provide a statistical description of any experiment combining state preparations, transformations and measurements. The framework of GPTs can accommodate classical and quantum physics, as well as hypothetical non-quantum physical theories which nonetheless possess quantum theory's most remarkable features, such as entanglement or teleportation. Notably, a small set of physically motivated axioms is enough to single out the GPT representation of quantum theory. L. Hardy introduced the concept of GPT in 2001, in an attempt to re-derive quantum theory from basic physical principles. Although Hardy's work was very influential (see the follow-ups below), one of his axioms was regarded as unsatisfactory: it stipulated that, of all the physical theories compatible with the rest of the axioms, one should choose the simplest one. The work of Dakic and Brukner eliminated this “axiom of simplicity” and provided a reconstruction of quantum theory based on three physical principles. This was followed by the more rigorous reconstruction of Masanes and Müller. Axioms common to these three reconstructions are: The subspace axiom: systems which can store the same amount of information are physically equivalent. Local tomography: to characterize the state of a composite system it is enough to conduct measurements at each part. Reversibility: for any two extremal states [i.e., states which are not statistical mixtures of other states], there exists a reversible physical transformation that maps one into the other. An alternative GPT reconstruction proposed by Chiribella et al. around the same time is also based on the Purification axiom: for any state of a physical system A there exists a bipartite physical system and an extremal state (or purification) such that is the restriction of to system . In addition, any two such purifications of can be mapped into one another via a reversible physical transformation on system . The use of purification to characterize quantum theory has been criticized on the grounds that it also applies in the Spekkens toy model. To the success of the GPT approach, it can be countered that all such works just recover finite dimensional quantum theory. In addition, none of the previous axioms can be experimentally falsified unless the measurement apparatuses are assumed to be tomographically complete. Categorical quantum mechanics or process theories Categorical Quantum Mechanics (CQM) or Process Theories are a general framework to describe physical theories, with an emphasis on processes and their compositions. It was pioneered by Samson Abramsky and Bob Coecke. 
Besides its influence in quantum foundations, most notably the use of a diagrammatic formalism, CQM also plays an important role in quantum technologies, most notably in the form of ZX-calculus. It also has been used to model theories outside of physics, for example the DisCoCat compositional natural language meaning model. The framework of black boxes In the black box or device-independent framework, an experiment is regarded as a black box where the experimentalist introduces an input (the type of experiment) and obtains an output (the outcome of the experiment). Experiments conducted by two or more parties in separate labs are hence described by their statistical correlations alone. From Bell's theorem, we know that classical and quantum physics predict different sets of allowed correlations. It is expected, therefore, that far-from-quantum physical theories should predict correlations beyond the quantum set. In fact, there exist instances of theoretical non-quantum correlations which, a priori, do not seem physically implausible. The aim of device-independent reconstructions is to show that all such supra-quantum examples are precluded by a reasonable physical principle. The physical principles proposed so far include no-signalling, Non-Trivial Communication Complexity, No-Advantage for Nonlocal computation, Information Causality, Macroscopic Locality, and Local Orthogonality. All these principles limit the set of possible correlations in non-trivial ways. Moreover, they are all device-independent: this means that they can be falsified under the assumption that we can decide if two or more events are space-like separated. The drawback of the device-independent approach is that, even when taken together, all the afore-mentioned physical principles do not suffice to single out the set of quantum correlations. In other words: all such reconstructions are partial. Interpretations of quantum theory An interpretation of quantum theory is a correspondence between the elements of its mathematical formalism and physical phenomena. For instance, in the pilot wave theory, the quantum wave function is interpreted as a field that guides the particle trajectory and evolves with it via a system of coupled differential equations. Most interpretations of quantum theory stem from the desire to solve the quantum measurement problem. Extensions of quantum theory In an attempt to reconcile quantum and classical physics, or to identify non-classical models with a dynamical causal structure, some modifications of quantum theory have been proposed. Collapse models Collapse models posit the existence of natural processes which periodically localize the wave-function. Such theories provide an explanation to the nonexistence of superpositions of macroscopic objects, at the cost of abandoning unitarity and exact energy conservation. Quantum measure theory In Sorkin's quantum measure theory (QMT), physical systems are not modeled via unitary rays and Hermitian operators, but through a single matrix-like object, the decoherence functional. The entries of the decoherence functional determine the feasibility to experimentally discriminate between two or more different sets of classical histories, as well as the probabilities of each experimental outcome. In some models of QMT the decoherence functional is further constrained to be positive semidefinite (strong positivity). Even under the assumption of strong positivity, there exist models of QMT which generate stronger-than-quantum Bell correlations. 
Acausal quantum processes The formalism of process matrices starts from the observation that, given the structure of quantum states, the set of feasible quantum operations follows from positivity considerations. Namely, for any linear map from states to probabilities one can find a physical system where this map corresponds to a physical measurement. Likewise, any linear transformation that maps composite states to states corresponds to a valid operation in some physical system. In view of this trend, it is reasonable to postulate that any high-order map from quantum instruments (namely, measurement processes) to probabilities should also be physically realizable. Any such map is termed a process matrix. As shown by Oreshkov et al., some process matrices describe situations where the notion of global causality breaks. The starting point of this claim is the following mental experiment: two parties, Alice and Bob, enter a building and end up in separate rooms. The rooms have ingoing and outgoing channels from which a quantum system periodically enters and leaves the room. While those systems are in the lab, Alice and Bob are able to interact with them in any way; in particular, they can measure some of their properties. Since Alice and Bob's interactions can be modeled by quantum instruments, the statistics they observe when they apply one instrument or another are given by a process matrix. As it turns out, there exist process matrices which would guarantee that the measurement statistics collected by Alice and Bob is incompatible with Alice interacting with her system at the same time, before or after Bob, or any convex combination of these three situations. Such processes are called acausal. See also Action at a distance Philosophy of physics Quantum computing Stern–Gerlach experiment Kochen–Specker theorem References Quantum mechanics Philosophy of physics
Quantum foundations
[ "Physics" ]
2,357
[ "Philosophy of physics", "Theoretical physics", "Applied and interdisciplinary physics", "Quantum mechanics" ]
46,838,138
https://en.wikipedia.org/wiki/Atomic%20layer%20etching
Atomic layer etching (ALE) is an emerging technique in semiconductor manufacture, in which a sequence alternating between self-limiting chemical modification steps which affect only the top atomic layers of the wafer, and etching steps which remove only the chemically-modified areas, allows the removal of individual atomic layers. The standard example is etching of silicon by alternating reaction with chlorine and etching with argon ions. This is a better-controlled process than reactive ion etching, though the issue with commercial use of it has been throughput; sophisticated gas handling is required, and removal rates of one atomic layer per second are around the state of the art. The equivalent process for depositing material is atomic layer deposition (ALD). ALD is substantially more mature, having been used by Intel for high-κ dielectric layers since 2007 and in Finland in the fabrication of thin film electroluminescent devices since 1985. References External links ECS-JSS focus journal on atomic layer etch Overview of atomic layer etching in the semiconductor industry Industrial processes Chemical processes Semiconductor device fabrication Etching (microfabrication)
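A toy numerical sketch of why the self-limiting chemistry described above gives digital, per-cycle removal: if the modification half-cycle saturates with dose (modelled here by first-order, Langmuir-type kinetics with an assumed time constant), overdosing changes the removed thickness very little, and the total etch depth is set by the cycle count. The layer thickness and time constant below are illustrative assumptions, not measured ALE parameters.

```python
import numpy as np

tau   = 2.0        # assumed saturation time constant of the dose step, s
layer = 0.135      # assumed thickness removed per saturated cycle, nm

def coverage(t_dose):
    """Fraction of the surface modified after a dose of duration t_dose."""
    return 1.0 - np.exp(-t_dose / tau)

for t_dose in (1.0, 5.0, 20.0):
    per_cycle = layer * coverage(t_dose)
    print(f"dose {t_dose:5.1f} s -> coverage {coverage(t_dose):.3f}, "
          f"removal {per_cycle:.3f} nm/cycle")

cycles = 100
print("etch depth after", cycles, "saturated cycles:", cycles * layer, "nm")
```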
Atomic layer etching
[ "Chemistry", "Materials_science" ]
230
[ "Microtechnology", "Etching (microfabrication)", "Chemical processes", "Semiconductor device fabrication", "nan", "Chemical process engineering" ]
46,840,722
https://en.wikipedia.org/wiki/Air%20pollution%20measurement
Air pollution measurement is the process of collecting and measuring the components of air pollution, notably gases and particulates. The earliest devices used to measure pollution include rain gauges (in studies of acid rain), Ringelmann charts for measuring smoke, and simple soot and dust collectors known as deposit gauges. Modern air pollution measurement is largely automated and carried out using many different devices and techniques. These range from simple absorbent test tubes known as diffusion tubes through to highly sophisticated chemical and physical sensors that give almost real-time pollution measurements, which are used to generate air quality indexes. Importance of measurement Air pollution is caused by many things. In urban environments, it can contain many components, notably solid and liquid particulates (such as soot from engines and fly ash escaping from incinerators), and numerous different gases (most commonly sulfur dioxide, nitrogen oxides, and carbon monoxide, all related to fuel combustion). These different forms of pollution have different effects on people's health, on the natural world (water, soil, crops, trees, and other vegetation), and on the built environment. Measuring air pollution is the first step in identifying its causes and then reducing or regulating them to keep the quality of the air inside legal limits (mandated by regulators such as the Environmental Protection Agency in the United States) or advisory guidelines suggested by bodies such as the World Health Organization (WHO). According to the WHO, over 6000 cities in 117 countries now routinely monitor the quality of their air. Types of measurement Air pollution is (broadly) measured in two different ways, passively or actively. Passive measurement Passive devices are relatively simple and low-cost. They work by soaking up or otherwise passively collecting a sample of the ambient air, which then has to be analyzed in a laboratory. One of the most common forms of passive measurement is the diffusion tube, which looks similar to a laboratory test tube and is fastened to something like a lamp post to absorb one or more specific pollutant gases of interest. After a period of time, the tube is taken down and sent to a laboratory for analysis. Deposit gauges, one of the oldest forms of pollution measurement, are another type of passive device. They are large funnels that collect soot or other particulates and drain them into sampling bottles, which, again have to be analyzed in a laboratory. Active measurement Active measurement devices are automated or semi-automated and tend to be more complex and sophisticated than passive devices, though they are not always more sensitive or reliable. They use fans to suck in the air, filter it, and either analyze it automatically there and then or collect and store it for later analysis in a laboratory. Active sensors use either physical or chemical methods. Physical methods measure an air sample without changing it, for example, by seeing how much of a certain wavelength of light it absorbs. Chemical methods change the sample in some way, through a chemical reaction, and measure that. Most automated air-quality sensors are examples of active measurement. Air quality sensors Air quality sensors range from small handheld devices to large-scale static monitoring stations in urban areas, and remote monitoring devices used on aeroplanes and space satellites. 
Personal air quality sensors At one end of the scale, there are small, inexpensive portable (and sometimes wearable), Internet-connected air pollution sensors, such as the Air Quality Egg and PurpleAir. These constantly sample particulates and gases and produce moderately accurate, almost real-time measurements that can be analyzed by smartphone apps. Their data can also be used in a crowdsourced way, either alone or with other pollution data, to build up maps of pollution over wide areas. They can be used for both indoor and outdoor environments and the majority focus on measuring five common forms of air pollution: ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. Some measure less common pollutants such as radon gas and formaldehyde. Sensors like this were once expensive, but the 2010s saw a trend towards cheaper portable devices that can be worn by individuals to monitor their local air quality levels, which are now sometimes informally referred to as low-cost sensors (LCS). A recent review by the European Commission's Joint Research Center identified 112 examples, made by 77 different manufacturers. Personal sensors can empower individuals and communities to better understand their exposure environments and risks from air pollution. For example, a research group led by William Griswold at UCSD handed out portable air pollution sensors to 16 commuters, and found "urban valleys" where buildings trapped pollution. The group also found that passengers in buses have higher exposures than those in cars. Small-scale static pollution monitoring Unlike low-cost monitors, which are carried from place to place, static monitors continuously sample and measure the air quality in a particular, urban location. Public places such as busy railroad stations sometimes have active air quality monitors permanently fixed alongside platforms to measure levels of nitrogen dioxide and other pollutants. Some static monitors are designed to give immediate feedback on local air quality. In Poland, EkoSłupek air monitors measure a range of pollutant gases and particulates and have small lamps on top that change colour from red to green to signal how healthy the air is nearby. Large-scale pollution monitoring At the opposite end of the spectrum from low-cost sensors are the large, very expensive, static street-side monitoring stations that constantly sample the various different pollutants commonly found in urban air for local authorities and that make up metropolitan monitoring systems such as the London Air Quality Network and a wider British network called the Automatic Urban and Rural Network (AURN). In the United States, the EPA maintains a repository of air quality data through the Air Quality System (AQS), where it stores data from over 10,000 monitors. The European Environment Agency collects its air quality data from 3,500 monitoring stations across the continent. The measurements made by sensors like these, which are much more accurate, are also near real-time and are used to generate air quality indexes (AQIs). Between the two extremes of large-scale static and small-scale wearable sensors are medium-sized, portable monitors (sometimes mounted in large wheelable cases) and even built into "smog-mobile" sampling trucks. Recently, drive-by air pollution sensing systems have emerged as a promising approach for air quality monitoring, utilizing sensors mounted on taxis, buses, trams, and other vehicles. 
In particular, buses have garnered considerable attention as a mobile sensing platform due to their widespread availability and extensive geographical coverage. Remote monitoring Air quality can also be measured remotely, from the air, by lidar, drones, and satellites, through methods such as gas filter correlation. Among the earliest satellite pollution monitoring efforts were GOME (Global Ozone Monitoring Experiment), which measured global (tropospheric) ozone levels from the ESA European Remote Sensing Satellite (ERS-2) in 1995, and NASA's MAPS (Mapping Pollution with Satellites), which measured the distribution of carbon monoxide in Earth's lower atmosphere, also in the 1990s. Methods of measurement for different pollutants Each different component of air pollution has to be measured by a different process, piece of equipment, or chemical reaction. Analytical chemistry techniques used for measuring pollution include gas chromatography; various forms of spectrometry, spectroscopy, and spectrophotometry; and flame photometry. Particulates Until the late 20th century, the amount of soot produced by something like a smokestack was often measured visually, and relatively crudely, by holding up cards with lines ruled onto them to indicate different shades of grey. These were known as Ringelmann charts, after their inventor, Max Ringelmann, and measured smoke on a six-point scale. In modern pollution monitoring stations, coarse (PM10) and fine (PM2.5) particulates are measured using a device called a tapered element oscillating microbalance (TEOM), based on a glass tube that vibrates more or less as collected particles accumulate on it. Particulates can also be measured using other kinds of particulate matter sampler, including optical photodetectors, which measure the light reflected from samples of light (bigger particles reflect more light) and gravimetric analysis (collected on filters and weighed). Black carbon is usually measured optically with Aethalometer-type instruments. Ultrafine particles (smaller than PM0.1, so generally less than 100 nanometers in diameter) are hard to detect and measure with some of these techniques. Typically, they are measured (or counted) with condensation particle counters, which effectively enlarge the particles by condensing vapors onto them to make bigger and much more easily detectable droplets. The atomic composition of particulate samples can be measured with techniques such as X-ray spectrometry. Nitrogen dioxide Nitrogen dioxide () can be measured passively with diffusion tubes, though it takes time to collect samples, analyze them, and produce results. It can be measured manually or automatically through the Griess-Saltzman method, as specified in ISO 6768:1998, or the Jacobs-Hocheiser method. It can also be measured automatically much more quickly, by a chemiluminescence analyzer, which determines nitrogen oxide levels from the light they give off. In the UK, for example, there are over 200 sites where is continuously monitored by chemiluminescence. Sulfur dioxide and hydrogen sulfide Sulfur dioxide () is measured by fluorescence spectroscopy. This involves firing ultraviolet light at a sample of the air and measuring the fluorescence produced. Absorption spectrophotometers are also used for measuring . Flame photometric analyzers are used for measuring other sulphur compounds in the air. 
Older methods of measuring sulfur dioxide involved passing air samples through glass bottles containing iodine, hydrogen peroxide, or sodium or potassium tetrachloromercurate. Carbon monoxide and carbon dioxide Carbon monoxide (CO) and carbon dioxide () are measured by non-dispersive infrared (NDIR) light absorption based on the Beer-Lambert law. CO can also be measured using electrochemical gel sensors and metal-oxide semiconductor (MOS) detectors, which are used in household carbon monoxide detectors. Ozone Ozone () is measured by seeing how much light a sample of ambient air absorbs. Higher concentrations of ozone absorb more light according to the Beer-Lambert law. Volatile organic compounds (VOCs) These are measured using gas chromatography and flame ionization (GC-FID). Hydrocarbons Hydrocarbons can be measured by gas chromatography and flame ionization detectors. They are sometimes expressed as separate measurements of methane (), NMHC (non-methane hydrocarbons), and THC (total hydrocarbon) emissions (where THC is the sum of and NMHC emissions). Ammonia Ammonia () can be measured by various methods including chemiluminescence. Natural measurements Air pollution can also be assessed more qualitatively by observing the effect of polluted air on growing plants such as lichens and mosses (an example of biomonitoring). Some scientific projects have used specially grown plants such as strawberries. Measurement units The amount of pollutant present in air is usually expressed as a concentration, measured in either parts-per notation (usually parts per billion, ppb, or parts per million, ppm, also known as the volume mixing ratio), or micrograms per cubic meter (μg/m³). It's relatively simple to convert one of these units into the other, taking account the different molecular weights of different gases and their temperatures and pressures. These units express the concentration of air pollution in terms of the mass or volume of the pollutant, and they are commonly used for measurements of both gaseous pollutants, such as nitrogen dioxide, and coarse (PM10) and fine (PM2.5) particulates. An alternative measurement for particulates, particle number, expresses the concentration in terms of the number of particles per volume of air instead, which can be a more meaningful way of assessing the health harms of highly toxic ultrafine particles (PM0.1, less than 0.1 μm in diameter). Particle number can be measured with equipment such as condensation particle counters. Urban air quality index (AQI) values are computed by combining or comparing the concentrations of a "basket" of common air pollutants (typically ozone, carbon monoxide, sulfur dioxide, nitrogen oxides, and both fine and coarse particulates) to produce a single number on an easy-to-understand (and often colour-coded) scale. History Air pollution was first systematically measured, in Britain, in the 19th century. In 1852, Scottish chemist Robert Angus Smith discovered (and named) acid rain after collecting rain samples that turned out to contain significant quantities of sulfur from coal burning. According to a chronology of air pollution by David Fowler and colleagues, Smith was "the first scientist to attempt multisite, multipollutant investigations of the chemical climatology of the polluted atmosphere". 
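Two of the relationships mentioned above, the Beer-Lambert law used in ozone photometry and the conversion between a mixing ratio (ppb) and a mass concentration (µg/m³), can be combined in a short worked sketch. The absorption cross-section, path length and intensity ratio below are illustrative assumptions; the unit conversion uses the conventional 24.45 L/mol molar volume at 25 °C and 1013 hPa.

```python
import numpy as np

# (1) Beer-Lambert: recover an ozone number density from a UV absorbance reading.
sigma = 1.15e-17       # O3 absorption cross-section at 254 nm, cm^2/molecule (approx.)
L     = 100.0          # assumed optical path length of the measurement cell, cm
I_I0  = 0.9988         # assumed transmitted/incident intensity ratio

N_o3  = -np.log(I_I0) / (sigma * L)      # molecules per cm^3
N_air = 2.46e19                          # air number density at 25 C, 1 atm, cm^-3
ppb   = N_o3 / N_air * 1e9
print(f"ozone: {N_o3:.3e} cm^-3  =  {ppb:.1f} ppb")

# (2) ug/m^3 = ppb * molar mass / molar volume (24.45 L/mol at 25 C, 1013 hPa)
M_o3  = 48.0
ug_m3 = ppb * M_o3 / 24.45
print(f"       = {ug_m3:.1f} ug/m^3")
```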
In the early 20th century, Irish physician and environmental engineer John Switzer Owens and the Committee for the Investigation of Atmospheric Pollution, of which he was secretary, greatly advanced the measurement and monitoring of air pollution using a network of deposit gauges. Owens also developed a number of new methods of measuring pollution. In December 1952, the Great Smog of London led to the deaths of an estimated 12,000 people. This event, and similar ones such as the 1948 Donora smog tragedy in the United States, became great turning points in environmental history because they brought about a radical rethink of pollution control. In the UK, the Great Smog of London led directly to the Clean Air Act, which may have had consequences even more far-reaching than originally intended. Catastrophic events like these led to pollution being measured and controlled much more rigorously. See also Air quality index Environmental monitoring References External links Science & Tech Spotlight: Air Quality Sensors Air pollution Atmospheric chemistry Measuring instruments Public health Pollution
Air pollution measurement
[ "Chemistry", "Technology", "Engineering" ]
2,907
[ "nan", "Measuring instruments" ]
46,840,724
https://en.wikipedia.org/wiki/Micromotor
Micromotors are very small particles (measured in microns) that can move themselves. The term is often used interchangeably with "nanomotor," despite the implicit size difference. These micromotors actually propel themselves in a specific direction autonomously when placed in a chemical solution. There are many different micromotor types operating under a host of mechanisms. Easily the most important examples are biological motors such as bacteria and any other self-propelled cells. Synthetically, researchers have exploited oxidation-reduction reactions to produce chemical gradients, local fluid flows, or streams of bubbles that then propel these micromotors through chemical media. Different stimuli, both external (light, magnetism) and internal (fuel concentration, material composition, particle asymmetry), can be used to control the behavior of these micromotors. Micromotors may have applications in medicine since they have been shown to be able to deliver materials to living cells within an organism. They also have been shown to be effective in degrading certain chemical and biological warfare agents. Janus Motor Propulsion Janus particle micromotors consist of two or more components with distinct physical properties, such as a titanium dioxide particle capped with gold, or a polystyrene bead coated on one side with a layer of platinum; both display a difference in catalytic activity between their two halves. When these motors are placed in a fuel, such as hydrogen peroxide, one redox half-reaction occurs on each pole according to catalytic activity. As the oxidation reaction produces electrons and protons, the reduction reaction consumes these as reactants on the opposite pole of the particle. This movement of molecules generates a fluid flow across the surface of the motor, which drives the particle forward. The catalytic difference between the poles of the Janus motor can be characteristic of the material, such as metals which catalyze at different rates, or induced by external stimuli like UV light, which can be absorbed by semiconductor materials like titanium dioxide to excite electrons for the redox reaction. Catalytic activity is not the only way to generate motion using Janus materials; self-propelled Janus droplets can be made using a complex emulsion of two different surfactant oils which move forward spontaneously due to the difference in surface tension as the two oils solubilize. However, a Janus structure is not always required to break symmetry. For enzyme-attached particles or lipid vesicles, symmetry can be disrupted by the uneven distribution of enzymes on their surface. These discoveries offer new insights into designing synthetic micro/nanomotors. Nano particle Implementation Nanoparticle incorporation into micromotors has recently been studied in more detail. Specifically, gold nanoparticles have been introduced to the traditional titanium dioxide outer layer of most micromotors. The size of these gold nanoparticles typically ranges from about 3 nm to 30 nm. Since these gold nanoparticles are layered on top of the inner core (usually a reducing agent, such as magnesium), enhanced macrogalvanic corrosion is observed. Technically, this is where the cathode and anode are in contact with each other, creating a circuit. The anode (here, the magnesium inner core), as a result of the circuit, is corroded. The depletion of this inner core drives the reduction of species in the surrounding chemical environment, which serves as the fuel source.
For example, in a TiO2/Au/Mg micromotor in a seawater environment, the magnesium inner core would experience corrosion and reduce water to begin a chain of reactions that results in hydrogen gas as a fuel source. The reduction reaction is as follows: 2H2O + 2e- → H2 + 2OH-. Applications Researchers hope that micromotors will be used in medicine to deliver medication and do other precise small-scale interventions. A study has shown that micromotors could deliver gold particles to the stomach lining of living mice. Photocatalytic Degradation of Biological and Chemical Warfare Agents Micromotors are capable of photocatalytic degradation with the appropriate composition. Specifically, micromotors with a titanium dioxide/gold nanoparticle outer layer and magnesium inner core are currently being examined and studied for their degradation efficacy against chemical and biological warfare agents (CBWA). These new TiO2/Au/Mg micromotors produce no reagents or toxic byproducts from the propulsion and degradation mechanisms. However, they are very effective against CBWAs and present a complete and rapid degradation of certain CBWAs. There has been recent research of TiO2/Au/Mg micromotors and their use and degradation efficacy against biological warfare agents, such as Bacillus anthracis, and chemical warfare agents, such as organophosphate nerve agents, a class of acetylcholinesterase inhibitors. Therefore, these micromotors are a possibility for medical and environmental applications. Photocatalytic Degradation Mechanism These new micromotors are composed of a photoactive photocatalyst outer/surface layer that often has active metal nanoparticles (platinum, gold, silver, etc.) on the surface as well. Under UV irradiation, the adsorbed water produces strongly oxidizing hydroxyl radicals. Also, adsorbed molecular O2 reacts with electrons, producing superoxide anions. Those superoxide anions in turn lead to the production of peroxide radicals, hydroxyl radicals, and hydroxyl anions. Transformation of CWAs into carbon dioxide and water, otherwise known as mineralization, has been observed as a result of these radicals and anions. Also, the active metal nanoparticles effectively shift the Fermi level of the photocatalyst, enhancing the distribution of the electron charge. Therefore, the lifetime of the radicals and anions is extended, so the implementation of the active metal nanoparticles has greatly improved photocatalytic efficiency. Metal-Organic Framework (MOF) based Micromotors Metal–organic frameworks (MOFs) are a class of compounds that are composed of a metal ion cluster coordinated to an organic linker. These compounds can form 1D, 2D and 3D structures. They possess a porous morphology which can be tuned in terms of shape and size depending on the metal ion and organic linker used to form the MOF. These pores grant them great catalytic properties, which is why MOF research focused on the catalytic degradation of contaminants for environmental remediation has been gaining more attention. The major limitation of MOFs is that they tend to settle at the bottom of the solution, reducing their effectiveness since they are not coming into contact with the contaminant. Thus, in the past years more and more research on MOFs for catalytic degradation has been implementing micromotors. The MOF particles are half-coated with a metal, creating a Janus motor particle (half metal, half MOF).
The motor aspect of the particle enhances its diffusion, increasing the probability of the MOF and contaminant encountering each other in solution, thus increasing its degradation rate. These MOF based micromotors have proven to be extremely efficient at decontaminating water, and after the fuel used for propulsion (in most cases hydrogen peroxide) is completely consumed, they settle at the bottom of the solution, facilitating the removal of the Janus motor particles from the solution. References Nanotechnology
Micromotor
[ "Materials_science", "Engineering" ]
1,488
[ "Nanotechnology", "Materials science" ]
42,221,853
https://en.wikipedia.org/wiki/Phi1%20Orionis
Phi1 Orionis is a binary star system in the constellation Orion, positioned less than a degree to the south of Meissa. It is visible to the naked eye with an apparent visual magnitude of 4.42. The distance to this system, based upon an annual parallax shift of 3.0 mas, is around 1,090 light-years. This is a single-lined spectroscopic binary star system with an orbital period of 3,068 days and an eccentricity of 0.22. It is a member of the young Lambda Orionis cluster and is roughly 7 million years old. The primary component is a B-type giant star with a stellar classification of B0 III. It has over 15 times the mass of the Sun and around 6.3 times the Sun's radius. Nothing is known about the secondary companion. It does not contribute a significant amount of light to the combined spectrum. References External links Spectroscopic binaries B-type giants Orionis, Phi Orion (constellation) Durchmusterung objects Orionis, 37 036822 026176 1876
Phi1 Orionis
[ "Astronomy" ]
239
[ "Constellations", "Orion (constellation)" ]
42,224,258
https://en.wikipedia.org/wiki/CFBDS%20J005910.90%E2%80%93011401.3
CFBDS J005910.90−011401.3 (also CFBDS J0059−0114 or CFBDS0059) is a brown dwarf with a low temperature of only 625 K, located in constellation Cetus about 30 light-years away. References Cetus Brown dwarfs CFBDS objects
CFBDS J005910.90–011401.3
[ "Astronomy" ]
69
[ "Cetus", "Constellations" ]
42,224,368
https://en.wikipedia.org/wiki/Synthetic%20virology
Synthetic virology is a branch of virology engaged in the study and engineering of synthetic man-made viruses. It is a multidisciplinary research field at the intersection of virology, synthetic biology, computational biology, and DNA nanotechnology, from which it borrows and integrates its concepts and methodologies. There is a wide range of applications for synthetic viral technology such as medical treatments, investigative tools, and reviving organisms. Constructing de novo synthetic viruses Advances in genome sequencing technology and oligonucleotide synthesis paved the way for construction of synthetic genomes based on previously sequenced genomes. Both RNA and DNA viruses can be made using existing methods. RNA viruses have historically been utilized due to the typically small genome size and existing reverse transcription machinery present. The first man-made infectious viruses generated without any natural template were of the polio virus and the φX174 bacteriophage. With synthetic live viruses, it is not whole viruses that are synthesized but rather their genome at first, both in the case of DNA and RNA viruses. For many viruses, viral RNA is infectious when introduced into a cell (during infection or after reverse transcription). These organisms are able to sustain an infectious life cycle upon introduction in vivo. Applications This technology is now being used to investigate novel vaccine strategies. The ability to synthesize viruses has far-reaching consequences, since viruses can no longer be regarded as extinct, as long as the information of their genome sequence is known and permissive cells are available. As of March 2020, the full-length genome sequences of 9,240 different viruses, including the smallpox virus, are publicly available in an online database maintained by the National Institutes of Health. Synthetic viruses have also been researched as potential gene therapy tools. See also Bioterrorism Disease X References External links First synthetic polio virus (2002) – First synthetic bacteriophage, φX174 (2003) – Codagenix – Synthetic virology technology to investigate novel vaccine strategies SynVaccine – Synthetic virology technology to investigate novel vaccine strategies West Nanorobotics – Metamorphic bacteriophage MV-28 (2019), Chimeric bacteriophage MV-3 (2018), Extremophile chickenpox vector CPV-2 (2017), and Multivalent viral vector MRHHS MV-5 (2016), synthetic virology technology to investigate anti-bacterial viruses and gene therapy vectors for cancer History of virology Synthetic biology
Synthetic virology
[ "Engineering", "Biology" ]
515
[ "Synthetic biology", "Viruses", "Biological engineering", "Virus stubs", "Bioinformatics", "Molecular genetics" ]
42,232,047
https://en.wikipedia.org/wiki/Human%20engineered%20cardiac%20tissues
Human engineered cardiac tissues (hECTs) are derived by experimental manipulation of pluripotent stem cells, such as human embryonic stem cells (hESCs) and, more recently, human induced pluripotent stem cells (hiPSCs) to differentiate into human cardiomyocytes. Interest in these bioengineered cardiac tissues has risen due to their potential use in cardiovascular research and clinical therapies. These tissues provide a unique in vitro model to study cardiac physiology with a species-specific advantage over cultured animal cells in experimental studies. hECTs also have therapeutic potential for in vivo regeneration of heart muscle. hECTs provide a valuable resource to reproduce the normal development of human heart tissue, understand the development of human cardiovascular disease (CVD), and may lead to engineered tissue-based therapies for CVD patients. Generation hESCs and hiPSCs are the primary cells used to generate hECTs. Human pluripotent stem cells are differentiated into cardiomyocytes (hPSC-CMs) in culture through a milieu containing small-molecule mediators (e.g. cytokines, growth and transcription factors). Transforming hPSC-CMs into hECTs incorporates the use of 3-dimensional (3D) tissue scaffolds to mimic the natural physiological environment of the heart. This 3D scaffold, along with collagen – a major component of the cardiac extracellular matrix – provides the appropriate conditions to promote cardiomyocyte organization, growth and differentiation. Characteristics At the intracellular level, hECTs exhibit several essential structural features of cardiomyocytes, including organized sarcomeres, gap-junctions, and sarcoplasmic reticulum structures; however, the distribution and organization of many of these structures is characteristic of neonatal heart tissue rather than adult human heart muscle. Recently, the combined effects of electrical and dynamic stimulation were found to significantly enhance the functional maturation of hECTs, resulting in improved alignment, structure, and organization, enhanced calcium handling capacity, increased expression of contractile and structural protein genes, and enhanced vascular network formation, closely resembling healthy in vivo conditions. hECTs also express key cardiac genes (α-MHC, SERCA2a and ACTC1) nearing the levels seen in the adult heart. Analogous to the characteristics of ECTs from animal models, hECTs beat spontaneously and reconstitute many fundamental physiological responses of normal heart muscle, such as the Frank-Starling mechanism and sensitivity to calcium. hECTs show dose-dependent responses to certain drugs, such as morphological changes in action potentials due to ion channel blockers and modulation of contractile properties by inotropic and lusitropic agents. Experimental and clinical applications Even with current technologies, hECT structure and function is more at the level of newborn heart muscle than adult myocardium. Nonetheless, important advances have led to the generation of hECT patches for myocardial repair in animal models and use for in vitro models of drug screening. hECTs can also be used to experimentally model CVD using genetic manipulation and adenoviral-mediated gene transfer. In animal models of myocardial infarction (MI), hECT injection into the hearts of rats and mice reduces infarct size and improves heart function and contractility. As a proof of principle, grafts of engineered heart tissues have been implanted in rats following MI with beneficial effects on left ventricular function. 
The use of hECTs in generating tissue engineered heart valves is also being explored to improve current heart valve constructs for in vivo animal studies. As tissue engineering technology advances to overcome current limitations, hECTs are a promising avenue for experimental drug discovery, screening and disease modelling and in vivo repair. References Stem cell research Tissue engineering Muscle tissue Tissue transplants
Human engineered cardiac tissues
[ "Chemistry", "Engineering", "Biology" ]
774
[ "Biological engineering", "Stem cell research", "Cloning", "Chemical engineering", "Translational medicine", "Tissue engineering", "Medical technology" ]
53,723,703
https://en.wikipedia.org/wiki/Antibiotic%20properties%20of%20nanoparticles
Nanoparticles have been studied extensively for their antimicrobial properties in order to fight super bug bacteria. Several characteristics in particular make nanoparticles strong candidates as a traditional antibiotic drug alternative. Firstly, they have a high surface area to volume ratio, which increases contact area with target organisms. Secondly, they may be synthesized from polymers, lipids, and metals. Thirdly, a multitude of chemical structures, such as fullerenes and metal oxides, allow for a diverse set of chemical functionalities. The key to nanoparticle efficacy against antibiotic resistant strains of bacteria lies in their small size. On the nano scale, particles can behave as molecules when interacting with a cell which allows them to easily penetrate the cell membrane and interfere in vital molecular pathways if the chemistry is possible. While their antibiotic properties against certain pathogens are important, oral antibiotics packaged in lipid nanoparticles can reduce collateral damage on the gut microbiota. Metal Nanoparticles A strong research focus has been placed on triggering production of excessive reactive oxygen species (ROS) using nanoparticles injected into bacterial cells. The presence of excessive ROS can stress the cell structure leading to damaged DNA/RNA, decreased membrane activity, disrupted metabolic activity, and harmful side reactions generating chemicals such as peroxides. ROS production has been induced generally through the introduction of both metal oxide and positively charged metal nanoparticles in the cell, such as iron oxides and silver. The positive charge of the metal is attracted to the negative charge of the cell membrane which it then easily penetrates. Redox reactions take place in the cell between the metals and oxygen containing species in the cell to produce ROS. Other novel techniques include utilizing quantum dots such as cadmium telluride, under a bright light source to excite and release electrons; this process initializes ROS production similar to the metal nanoparticles. Carbon Structures Carbon nanostructures such as graphene oxide (GO) sheets, nano tubes, and fullerenes have proven antimicrobial properties when used synergistically with other methods. UV radiation directed at GO sheets, for example, disrupts bacterial cell activity and colony growth via ROS production. Doping nano tubes or fullerenes with silver or copper nanoparticles may also harm the cells ability to grow and replicate DNA. Nano tubes and fullerenes in particular are being studied as aqueous dispersions rather than polymers, metals or other traditional dry solid particulates. The exact mechanism which promotes this synergy is not clearly understood but it is believed to be linked to the unique surface chemistry of carbon nanostructures (i.e. the large aspect ratio of carbon nanotubes, high surface energy in GO sheets). Human applications of carbon nano materials have not been tested due to the unknown potential hazards. Current research on the carcinogenic effects, if any, of carbon nanostructures is still in its infancy and there is therefore no clear consensus on the topic. Drug Synergies Nanoparticles can enhance the effects of traditional antibiotics which a bacterium may have become resistant to, and decrease the overall minimum inhibitory concentration (MIC) required for a drug. Silver nanoparticles improve the activity of amoxicillin, penicillin, and gentamicin in bacteria by altering membrane permeability and improving drug delivery. 
Nanoparticles themselves may have antimicrobial properties enhanced or induced with the addition of organic drugs. Gold particles, while not inherently antimicrobial, were discovered to express antimicrobial properties when functionalized with ampicillin. In addition to this, gold nanoparticles demonstrated improved membrane permeability with the addition of 4,6-diamino-2-pyrimidinethiol (DAPT) and non-antibiotic amines (NAA) to their surfaces. References Nanomedicine
Antibiotic properties of nanoparticles
[ "Materials_science", "Biology" ]
805
[ "Biotechnology products", "Antibiotics", "Nanomedicine", "Nanotechnology", "Biocides" ]
53,730,611
https://en.wikipedia.org/wiki/Anthroponics
Anthroponics is a type of hydroponics system that uses human waste, such as urine, as the source of nutrients for the cultivated plants. In general, the human urine or mixed waste is collected and stored for a period of time, before being applied either directly or passed through a biofilter before reaching the plants. As a form of organic hydroponics, anthroponics combines elements of both hydroponics and aquaponics systems. History While human waste has historically been used as a fertilizer, its use in soilless systems is a recent field of research. The earliest published research on the topic dates from 1991, by researcher B. Guterstam, in which the treatment of domestic wastewater by an aquaculture and hydroponic mesocosm is described. Since then, other researchers have explored both human mixed waste and human urine as nutrient sources for hydroponic cultivation, studying the potential of such waste and its comparison to traditional fertilizers, in the contexts of wastewater treatment, agriculture, and even space exploration. Urine as a fertilizer Urine is 91-96% water, with urea constituting the largest amount of solids, and the rest being inorganic salts and organic compounds, including proteins, hormones, and a wide range of metabolites. The urea in urine naturally converts into ammonia through a process known as ammonia volatilization from urea. This process, which can take between 5 weeks and 6 months, increases the pH of the liquid to 9, thus sterilizing it. The time it takes for this process to occur can be drastically reduced to hours or minutes through the addition of the urease enzyme, which can be synthesized or found in watermelon seeds. The sterilized and volatilized liquid is then passed through a biofilter where nitrifying bacteria convert the ammonia to nitrate, a more plant-available form of nitrogen. It has been experimentally shown that, on average, 0.47 mL of human urine can grow 1 gram of lettuce. Given that an adult human produces around 1.4 litres of urine per day, it is estimated that one adult could produce almost 3 kg of lettuce from the volume of urine excreted in just one day. Wood ash has also been used to supplement urine when cultivating cucumbers, as they require more nutrients than those found in urine. Hydroponic subsystem After the biofilter, the water is transported to the hydroponic component where the plants are located, and where they will absorb the nutrients, cleaning the water before it returns to the biofilter. Almost all techniques used in hydroponics and aquaponics are also applicable to anthroponics. These include: Deep water culture, Nutrient film technique, and Media beds. Advantages Urine-based solutions in hydroponics research seem to have been developed out of sustainability concerns with current mineral-based hydroponic solutions. Mineral-based commercial nutrient solutions are resource-intensive and energy-demanding, while also producing a lot of waste. The activities involved in their production include mining, ore treatment, chemical processing, and transportation, which result in the required nutrients for the final solution. The whole process requires fossil fuels, electricity, chemicals, and water, while producing the nutrient solution, but also mining waste, greenhouse gases, and wastewater. By comparison, using urine as the nutrient source requires the collection of urine, electricity, some nutrient salts, and water, while producing no waste, limited greenhouse gases, and the final nutrient solution.
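A rough back-of-the-envelope sketch of the lettuce-yield estimate quoted in the Urine as a fertilizer section above; the per-gram and per-day figures are the ones cited there, while the helper function itself is purely illustrative.

```python
# Rough illustration of the yield figures quoted above (0.47 mL of urine per
# gram of lettuce; ~1.4 L of urine per adult per day). The numbers come from
# the article; the function is only a back-of-the-envelope helper.
def lettuce_from_urine(urine_litres_per_day=1.4, ml_urine_per_gram=0.47):
    grams_per_day = (urine_litres_per_day * 1000.0) / ml_urine_per_gram
    return grams_per_day / 1000.0  # kilograms of lettuce per day

print(round(lettuce_from_urine(), 2))  # ~2.98 kg/day, i.e. "almost 3 kg"
```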
Disadvantages Some disadvantages concerning the use of urine as the nutrient source in an hydroponics system include strict laws concerning the use of human waste in food crops, the unpleasant handling and odors produced by human urine, and the release of persistent organic pollutants and trace metals in human urine. References Hydroponics Sewerage
Anthroponics
[ "Chemistry", "Engineering", "Environmental_science" ]
788
[ "Sewerage", "Environmental engineering", "Water pollution" ]
53,730,614
https://en.wikipedia.org/wiki/Family%20symmetries
In particle physics, the family symmetries or horizontal symmetries are various discrete, global, or local symmetries between quark-lepton families or generations. In contrast to the intrafamily or vertical symmetries (collected in the conventional Standard Model and Grand Unified Theories) which operate inside each family, these symmetries presumably underlie the physics of the family flavors. They may be treated as a new set of quantum charges assigned to different families of quarks and leptons. Spontaneous breaking of these symmetries is believed to lead to an adequate description of the flavor mixing of quarks and leptons of different families. This is certainly one of the major problems that presently confront particle physics. Despite its great success in explaining the basic interactions of nature, the Standard Model still lacks the ability to explain the flavor mixing angles or weak mixing angles (as they are conventionally referred to), whose observed values are collected in the corresponding Cabibbo–Kobayashi–Maskawa matrices. While being conceptually useful and leading in some cases to physically valuable patterns of flavor mixing, the family symmetries are not yet observationally confirmed. Introduction The Standard Model is based on the internal symmetries of the unitary product group SU(3) × SU(2) × U(1), the members of which have a quite different nature. The color symmetry SU(3) has a vectorlike structure, due to which the lefthanded and righthanded quarks are transformed identically as its fundamental triplets. At the same time, the electroweak symmetry SU(2) × U(1), consisting of the weak isospin SU(2) and hypercharge U(1), is chiral. So, the lefthanded components of all quarks and leptons are weak isospin doublets, whereas their righthanded components are its singlets. Here, the quark-lepton families are numbered by a family index, both for the quark and lepton ones. The up and down righthanded quarks and leptons are written separately, and for completeness the righthanded neutrinos are also included. Many attempts have been made to interpret the existence of the quark-lepton families and the pattern of their mixing in terms of various family symmetries – discrete or continuous, global or local. Among them, the abelian U(1) and non-abelian SU(2) and SU(3) family symmetries seem to be the most interesting. They provide some guidance to the mass matrices for families of quarks and leptons, leading to relationships between their masses and mixing parameters. In the framework of the supersymmetric Standard Model, such a family symmetry should at the same time provide an almost uniform mass spectrum for superpartners, with a high degree of family flavor conservation, which makes its existence even more necessary in the SUSY case. The U(1) symmetry case This class of family symmetry models was first studied by Froggatt and Nielsen in 1979 and was later extended. In this mechanism, one introduces a new complex scalar field called the flavon whose vacuum expectation value (VEV) presumably breaks the global family symmetry imposed. Under this symmetry different quark-lepton families carry different charges. Accordingly, the connection between families is provided by bringing into play (via the relevant see-saw mechanism) some intermediate heavy fermion(s) properly charged under the family symmetry.
So, the effective Yukawa coupling constants for quark-lepton families are arranged in a way that they may only appear through the primary couplings of these families with the messenger fermion(s) and the flavon field . The hierarchy of these couplings is determined by some small parameter , which is given by ratio of the flavon VEV to the mass of the intermediate heavy fermion,   (or , if the messenger fermions have been integrated out at some high-energy cut-off scale). Since different quark-lepton families carry different charges the various coupling constants are suppressed by different powers of being primarily controlled by the postulated fermion charge assignment. Specially, for quarks these couplings acquire the form where the index stands for the particular family of the up quarks () and down quarks () including their lefthanded and righthanded components, respectively. This hierarchy is then transferred to their mass matrices once the conventional Standard Model Higgs boson develops its own VEV, . So, the mass matrices being proportional to the matrices of Yukawa coupling constants can generally produce (by an appropriate choice of the family charges) the required patterns for the weak mixing angles which are in basic conformity with the corresponding Cabibbo–Kobayashi–Maskawa matrices observed. In the same way the appropriate  mass matrices can also be arranged for the lepton families. Among some other applications the family symmetry, the most interesting one could stem from its possible relation to (or even identification) with the Peccei–Quinn symmetry. This may point out some deep connection between the fermion mixing problem and the strong CP problem of the Standard Model that was also discussed in the literature. The SU(2) family symmetry The family symmetry models were first addressed by Wilczek and Zee in 1979 and then the interest in them was renewed in the 1990s especially in connection with the Supersymmetric Standard Model. In the original model the quark-lepton families fall into the horizontal triplets of the local symmetry taken. Fortunately, this symmetry is generically free from the gauge anomaly problem which may appear for other local family symmetry candidates. Generally, the model contains the set of the Higgs boson multiplets being scalar, vector and tensor of , apart from they all are the doublets of the conventional electroweak symmetry . These scalar multiplets provide the mass matrices for quarks and leptons giving eventually the reasonable weak mixing angles in terms of the fermion mass ratios. In principle, one could hope to reach it in a more economic way when the heavy family masses appears at the tree-level, while the light families acquire their masses from radiative corrections at the one–loop level and higher ones. Another and presumably more realistic way of using of the family symmetry is based on the picture that, in the absence of flavor mixing, only the particles belonging to the third generation ( ) have non-zero masses. The masses and the mixing angles of the light first and second families being doublets of the symmetry appear then as a result of the tree-level mixings of families, related to spontaneous breaking of this symmetry. The VEV hierarchy of the horizontal scalars are then enhanced by the effective cut-off scale involved. 
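By way of illustration only, a generic Froggatt–Nielsen-type sketch (an assumption for the sake of exposition, not the specific matrices of the models cited above): assigning family charges q = (3, 2, 0) to the three generations, with the couplings suppressed as Y_{ij} \sim \epsilon^{\,q_i + q_j}, gives

\[
Y \sim \begin{pmatrix} \epsilon^{6} & \epsilon^{5} & \epsilon^{3} \\ \epsilon^{5} & \epsilon^{4} & \epsilon^{2} \\ \epsilon^{3} & \epsilon^{2} & 1 \end{pmatrix}, \qquad \epsilon \ll 1,
\]

so that the mass eigenvalues scale roughly as \epsilon^{6} : \epsilon^{4} : 1 and the mixing angles as powers of \epsilon, which is the qualitative pattern that both the abelian and non-abelian constructions discussed here aim to reproduce.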
Again, as in the above symmetry case, the family mixings are eventually turned out to be proportional to powers of some small parameter, which are determined by the dimensions of the family symmetry allowed operators. This finally generate the effective (diagonal and off-diagonal Yukawa couplings for the light families in the framework of the (ordinary or supersymmetric) Standard Model. In supersymmetric theories there are mass and interaction matrices for the squarks and sleptons, leading to a rich flavor structure. In particular, if fermions and scalars of a given charge have mass matrices which are not diagonalized by the same rotation, new mixing matrices occur at gaugino vertices. This may lead in general to the dangerous light family flavor changing processes unless the breaking of symmetry, which controls the light family sector, together with small fermion masses yields the small mass splittings of their scalar superpartners. Apart from with all that, there is also the dynamical aspect of the local symmetry, related to its horizontal gauge bosons. The point is, however, that these bosons (as well as various Higgs bosons involved) have to be several orders of magnitude more massive than the Standard Model W and Z bosons  in order to avoid forbidden quark-flavor- and lepton-flavor-changing transitions. Generally, this requires the introduction of additional Higgs bosons to give the large masses to the horizontal gauge bosons so as to not disturb the masses of the fermions involved. The chiral SU(3) symmetry alternative It can be generally argued that the presumably adequate family symmetry should be chiral rather than vectorlike, since the vectorlike family symmetries do not in general forbid the large invariant masses for quark-lepton families. This may lead (without some special fine tuning of parameters) to the almost uniform mass spectra for them that would be natural if the family symmetry were exact rather than broken. Rather intriguingly, both known examples of the local vectorlike symmetries, electromagnetic and color , appear to be exact symmetries, while all chiral symmetries including the conventional electroweak symmetry and grand unifications SU(5), SO(10) and E(6) appear broken. In this connection, one of the most potentially relevant option considered in the literature may be associated with the local chiral family symmetry introduced by Chkareuli in 1980 in the framework of the family-unified symmetry and further developed by its own. Motivation The choice of the as the underlying family symmetry beyond the Standard Model appears related to the following issues: (i) It provides a natural explanation of the number three of observed quark-lepton families correlated with three species of massless or light neutrinos contributing to the invisible Z boson  partial decay width; (ii) Its local nature conforms with the other local symmetries of the Standard Model, such as the weak isospin symmetry or color symmetry . This actually leads to the family-unified Standard Model with a total symmetry which then breaks at some high family scale down to the conventional SM; (iii) Its chiral nature, according to which the left-handed and right-handed fermions are proposed to be, respectively, the fundamental triplets and antitriplets of the symmetry. 
This means that their masses may only appear as a result of its spontaneous symmetry breaking of the whose anisotropy in the family flavor space provides the hierarchical mass spectrum of quark-lepton families; (iv) The invariant Yukawa couplings are always accompanied by an accidental global chiral symmetry which can be identified with the Peccei–Quinn symmetry, thus giving a solution to the strong CP problem; (v) Due to its chiral structure, it admits a natural unification with conventional Grand unified theories in a direct product form, such as , or , and also as a subgroup of the extended (family-unified) or GUTs; (vi) It has a straightforward extension to the supersymmetric Standard Model and GUTs. With these natural criteria accepted, other family symmetry candidates have turned out to be at least partially discriminated. Indeed, the family symmetry does not satisfy the criterion (i) and is in fact applicable to any number of quark-lepton families. Also, the family symmetry can contain, besides two light families treated as its doublets, any number of additional (singlets or new doublets of ) families. All global non-Abelian symmetries are excluded by the criterion (ii), while the vectorlike symmetries are excluded by the criteria (iii) and (v). Basic applications In the Standard Model and GUT extended by the local chiral symmetry quarks and leptons are supposed to be chiral triplets, so that their left-handed (weak-doublet) components – and – are taken to be the triplets of , while their right-handed (weak-singlet) components – , ,   and – are anti-triplets (or vice versa). Here is the family symmetry index ( ), rather than the index introduced in Section in order to simply number all the families involved. The spontaneous breaking of this symmetry gives some understanding to the observed hierarchy between elements of the quark-lepton mass matrices and presence of texture zeros in them. This breaking is normally provided by some set of the horizontal scalar multiplets being symmetrical and anti-symmetrical under the ,  and ( = 1, 2, ..., = 1, 2, ...). When they develop their VEVs, the up and down quark families acquire their effective Yukawa coupling constants which generally have a form where again the index stands for the particular family of the up quarks ( ) and down quarks ( ), respectively ( and are some dimensionless proportionality constants of the order).  These coupling constants normally appear via the sort of the see-saw mechanism due to the exchange of a special set of heavy (of order the family symmetry scale ) vectorlike fermions. The VEVs of the horizontal scalars taken in general as large as , are supposed to be hierarchically arranged along the different directions in family flavor space. This hierarchy is then transferred to their mass matrices and , when the conventional Standard Model Higgs boson develops its own VEV in the corresponding Yukawa couplings In the minimal case with one  sextet and two triplets developing the basic VEV configuration one comes the typical nearest-neighbor family mixing pattern in the mass matrices and  that leads to the weak mixing angles being generally in approximate conformity with the corresponding Cabibbo–Kobayashi–Maskawa matrices. In the same way, the appropriate  mass matrices can also be arranged for the lepton families that leads to the realistic description – both in the Standard Model and  GUT – of the lepton masses and mixings, including neutrino masses and oscillations. 
In the framework of supersymmetric theories, the SU(3) family symmetry, hand in hand with hierarchical masses and mixings for quarks and leptons, leads to an almost uniform mass spectrum for their superpartners with a high degree of flavor conservation. Due to the special relations between the fermion mass matrices and soft SUSY breaking terms, dangerous supersymmetric contributions to flavor-changing processes can be naturally suppressed. Among other applications of the SU(3) symmetry, the most interesting ones are those related to its gauge sector. Generally, the family scale may be located in a wide range, up to the grand unification scale and even higher. For a relatively low family scale, the SU(3) gauge bosons will also enter into play, so that many flavor-changing rare processes, including some of their astrophysical consequences, may become important. In contrast to the vectorlike family symmetries, the chiral SU(3) is not generically free from gauge anomalies. These, however, can be readily cancelled by the introduction of an appropriate set of purely horizontal fermion multiplets. Being sterile with respect to all the other Standard Model interactions, they may be treated as one of the possible candidates for dark matter in the Universe. A special sector of applications is related to a new type of topological defects – flavored cosmic strings and monopoles – which can appear during the spontaneous violation of the family SU(3) symmetry and which may be considered as possible candidates for the cold dark matter in the Universe. Summary Despite some progress in understanding the family flavor mixing problem, one still has the uneasy feeling that, in many cases, the problem seems just to be transferred from one place to another. The peculiar quark-lepton mass hierarchy is replaced by a peculiar set of flavor charges or a peculiar hierarchy of the horizontal Higgs field VEVs in the non-abelian SU(2) or SU(3) symmetry cases. As a result, there are not so many distinctive and testable generic predictions relating the weak mixing angles to the quark-lepton masses that could clearly differentiate one family symmetry model from another. This is indeed related to the fact that the Yukawa sector in the theory is somewhat arbitrary as compared with its gauge sector. Actually, one can always arrange the flavor charges of families or the VEVs of horizontal scalars in these models in a way to get acceptable hierarchical mass matrices for quarks and relatively smooth ones for leptons. As a matter of fact, one of the possible ways for these models to have their own specific predictions might appear if nature favored the local family symmetry case. This would then allow one to completely exclude the global family symmetry case and properly differentiate the non-Abelian SU(2) and SU(3) symmetry cases. All that is possible, of course, provided that the breaking scale of such a family symmetry is not as large as the GUT scale or Planck scale. Otherwise, all the flavor-changing processes caused by the exchanges of the horizontal gauge bosons will be vanishingly suppressed. Another way for these models to be distinguished might appear if they were generically included in some extended GUT. In contrast to many others, such a possibility appears for the chiral SU(3) family symmetry (considered in the previous section), which could be incorporated into the family-unified SU(8) symmetry.
Even if this GUT would not provide the comparatively low family symmetry scale, the existence of several multiplets of extra heavy fermions in the original SU(8) matter sector could help with model verification. Some of them, through a natural see-saw mechanism, could provide the physical neutrino masses which, in contrast to the conventional picture, may follow either the direct or the inverted family hierarchy. Others mix with ordinary quark-lepton families in such a way that a marked violation of unitarity in the CKM matrix may arise. It is also worth pointing out an important aspect related to the family symmetries. As a matter of fact, the existence of three identical quark-lepton families could mean that there might exist truly elementary fermions, preons, which are the actual carriers of all the fundamental Standard Model quantum numbers involved and which compose the observed quarks and leptons at larger distances. Generally, certain regularities in replications of particles may signal their composite structure. Indeed, just such regularities in the spectroscopy of hadrons observed in the nineteen-sixties made it possible to discover the constituent quark structure of hadrons. As to the quarks and leptons, it appears that the idea of their composite structure may distinguish the local chiral SU(3) family symmetry among other candidates. Namely, the preon model happens under certain natural conditions to determine a local “metaflavor” symmetry as a basic internal symmetry of the physical world at small distances. Being exact for preons, it then gets broken at large distances down to a conventional SU(5) GUT with an extra local SU(3) family symmetry and three standard families of composite quarks and leptons. References Symmetry Physics beyond the Standard Model
Family symmetries
[ "Physics", "Mathematics" ]
3,772
[ "Unsolved problems in physics", "Particle physics", "Geometry", "Physics beyond the Standard Model", "Symmetry" ]
40,802,552
https://en.wikipedia.org/wiki/Screen%20scroll%20centrifuge
A screen/scroll centrifuge is a filtering or screen centrifuge which is also known as a worm screen or conveyor discharge centrifuge. This centrifuge was first introduced in the middle of the 19th century. After developing new technologies over the decades, it is now one of the most widely used processes in many industries for the separation of crystalline, granular or fibrous materials from a solid-liquid mixture. This process is also used to dry the solid material. It is most frequently seen in the coal preparation industry. Moreover, it can be found in other industries, such as the chemical, environmental, food and mining fields. Fundamentals A screen scroll centrifuge is a filtering centrifuge which separates solids and liquid from a solid-liquid mixture. This type of centrifuge is commonly used in a continuous process in which slurry containing both solid and liquid is continuously fed into and continuously discharged from the centrifuge. In a typical screen scroll centrifuge, the basic principle is that the entering feed is separated into liquid and solids as two products. The feed is transported from the small to the larger diameter end of the frustoconical basket by the inclination of the screen basket and the slightly different speed of the scraper worm. The solid material retained on the screen is moved along the cone via an internal screw conveyor, while the liquid output is obtained as centrifugal force causes the feed slurry to pass through the screen openings. Furthermore, a screen scroll centrifuge may rotate in either a horizontal or a vertical position. Range of applications The use of the screen scroll centrifuge has been seen in numerous process engineering industries. One of the most noticeable applications is within the coal preparation industry. In addition to that, this centrifuge is also employed in the dewatering of potash and gilsonite, in salt processes, and in dewatering various sands. Moreover, it is also designed for use in the food processing industry, for instance, dairy production, cocoa butter equivalents and other confectionery fats. Designs available Screen scroll centrifuges, which are also known as worm screen or conveyor discharge centrifuges, cause the solids to move along the cone by means of an internal screw conveyor. The conveyor in the centrifuge spins at a differential speed to the conical screen, and centrifugal forces of approximately 1800-2600 g facilitate reasonable throughputs. Some screen scroll centrifuges are available with up to four separate stages for improved performance. The first stage is used to de-liquor the feed, which is followed by a washing stage, with the final stage being used for drying. In an advanced screen scroll centrifuge with four stages, two separate washes are employed in order to segregate the wash liquors. The two most common types of screen/scroll centrifuge used in industrial applications are the vertical screen/scroll centrifuge and the horizontal screen/scroll centrifuge. Vertical screen scroll centrifuge A vertical screen scroll centrifuge is built with the main components of screen, scroll, basket, housing, and helical screw. Feed containing liquid and solid materials is introduced into the vertical screen scroll centrifuge from the top. The feed is accelerated by centrifugal acceleration produced by the rotating parts it contacts.
As such, centrifugal force slings liquids through the openings, while solids are held on the screen surface as they cannot pass through because of granular particles larger than the screen pores or due to agglomeration. Movement of solids across the screen surface is manipulated by flights. Liquids that have gone through screen are obtained and discharged through effluent outlet from the side of machine, while solids collected from the screen fall by gravity through the bottom discharge of the machine. Some of the available vertical screen scroll centrifuges are CMI model EBR and CMI model EBW which are manufactured by Centrifugal & Mechanical Industries (CMI). The former can dewater coarser particles size ranging from 1.5 in to 28 mesh whereas the latter can dewater finer particles size ranging from 1 mm to 150 mesh. Horizontal screen scroll centrifuge Similar to a vertical screen scroll centrifuge, a horizontal screen scroll centrifuge is constructed of several main parts: screen, scroll, basket, housing, and helical screw. The screen and the basket with frustoconical geometry are assembled into the housing in a horizontal axis. Inside the frustoconical structure there is a tubular wall. Inside the tubular wall there is a cylinder of helical screw which flight on scroll pass. The tubular wall will have a slightly different angular speed to the helical screw. The solid-liquid mixture is fed into the closed rearward portion of the scroll. The rotation movement of the scroll, screen, and basket allows the liquid to pass through from the openings on the screen (via centrifugal force). The solid remains will be separated according to size due to the difference of the angular velocity of the helical screw and the basket. The helical screw pushes the solid material to be discharged to the forward end of the scroll. The processing time depends on helical screw pitch and the angular velocity difference. It may also be influenced by the design of the scroll feed opening. The solid particles exiting are usually collected via a conveyor in the collection unit. Main process characteristics and its assessment The performance and output efficiency of the screen scroll centrifuge can be affected by several factors, such as particle size and feed concentration, flow rate of feed and screen mesh size of the centrifuge. Particle size and feed solids Particle size in the feed is one of the most important parameters to be taken into account since the choice of slot and screen holes size of screen scroll centrifuge or different types of process depends on feed contents. Non-uniform particles size in the feed can cause partial blockage on the screen due to the small size solids blocking the holes besides normal and larger particles. So, liquids flow over the screen instead of passing through it. As such, it requires higher solids content in the feed in order to obtain good and reasonable results - normally greater than 15% and up to 60% w/w. Nevertheless, the flow rate of the feed can be monitored to overcome this setback. Another possible method is to carry out pre-treatment on the feed to be used for screen scroll centrifuge, for example, by the filtration process. Particle size, thereafter, can be analysed and the selection of particular screen size can be determined. However, it increases the total operating cost. Typical operating range of particle size and feed concentration for screen scroll centrifuges are 100 – 20,000 μm and 3 – 90% mass of the solids in the feed. 
In general, slot and screen hole sizes range from 40 to 200 μm, with open areas from 5 to 15%. Nevertheless, recent products are claimed to be able to handle particle sizes as low as 50 μm. Screens are generally metallic foil or wedge wire and, more recently, metallic and composite screens perforated with micro-waterjet cutting. Feed flow rate As mentioned in the previous section, feed flow rate is one of the crucial parameters to be controlled to achieve highly efficient output. Centrifuge performance is sensitive to feed flow rate. Even though increasing the feed flow rate can prevent the screens from blocking, wetter solids are obtained. This is due to the increase in hydraulic load on the centrifuge when a higher feed rate is applied, while the differential rotation speed between the cone and scroll, and the retention time within the dewatering zone of the basket, are fixed. In addition, a higher feed rate leads to a surge in the effective thickness of the bed since it is dragged down by the scroll. Basket geometry and its material Variations in the material and design of the main components of the centrifuge, such as the screen plate, helical screw and basket, could extend the service life of the machine. Another important factor is the conical basket size and its angle within the centrifuge. Different basket sizes and angles between the basket and the helical screw can vary the angular speed; as a result, the quality of the product is affected. Moreover, the shape of the helical screw is also important since it optimizes the transportation of the cake. A selection of typical screen scroll centrifuges with different basket sizes found on the market is presented in the following Table 1. The helical scroll and conical basket sections are commonly built at angles of 10°, 15° and 20°. Table 1: A selection of screen scroll centrifuge sizes Advantages and limitations over competitive processes The screen scroll centrifuge has the advantage of a driven scroll helical conveyor which gives a small differential speed relative to the conical basket. The helical conveyor is installed in the centrifuge to control the transport of the incoming feed, allowing the residence time of the solids in the basket to be increased, giving enhanced process performance. Moreover, the helical conveyor and conical basket sections are designed at a certain angle (10°, 15° and 20° being common) such that solid particles are dragged on the conveyor along the cone towards the discharge point. As a result, no even solids layer is formed; instead, the solids form piles of triangular section in front of the blades of the conveyor. The residence time within a screen scroll centrifuge is typically about 4 to 15 seconds, which is longer than in a normal, simpler conical basket centrifuge. This permits a sufficient interaction time between wash liquids and cake. However, the presence of the conveyor causes crystal breakage and abrasion problems as well as the formation of an uneven solids layer, which can lead to poor washing. This can be controlled by the conveyor speed. TEMA, a specialist in centrifuges, claims that a horizontal screen scroll centrifuge can achieve an overall recovery of fines of up to 99%, combined with very low product moisture. Furthermore, it is recommended that operating with a feed containing more than 40% solids with a minimal size of 100 μm achieves the best results.
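As a rough illustration of the centrifugal accelerations (approximately 1800-2600 g) mentioned under Designs available above, the following sketch relates rotational speed and basket radius to the acceleration expressed in multiples of g; the speed and radius in the example are illustrative assumptions, not manufacturer figures.

```python
# Illustrative helper (not from the article): relative centrifugal force, in
# multiples of g, for a basket of given radius spinning at a given speed.
import math

def relative_centrifugal_force(rpm, basket_radius_m):
    omega = 2.0 * math.pi * rpm / 60.0            # angular speed in rad/s
    return (omega ** 2) * basket_radius_m / 9.81  # centripetal acceleration / g

# Example: a 0.5 m diameter basket (0.25 m radius) spinning at 3000 rpm
print(round(relative_centrifugal_force(3000, 0.25)))  # roughly 2500 g
```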
The use of a screen scroll centrifuge in the horizontal orientation is more economical, as its capacity is 40% more tonnage than that of a vertical machine of the same size for the same energy cost. In addition, maintenance of the horizontal screen scroll centrifuge can be carried out easily since total disassembly is not needed. Nowadays, screen scroll centrifuges are equipped with a CIP (clean-in-place) system for self-cleaning within the centrifuge. On the other hand, it has the downside of possible blockage of the screen when the feed slurry contains small crystals alongside large and normal solid crystals. Consequently, the screen becomes less permeable, so the liquids flow over the screen rather than passing through the screen mesh. This problem, however, can be overcome by reducing the feed flow rate. Possible heuristics to be used during design of the process The basket, helical screw, screen filter, and other parts are designed to meet the process input and performance requirements. Most of the parts are made from metal to be able to withstand the separation process. A bigger bowl can contain more input but at the same time increases the processing and residence time. The helical screw is designed to hold and move the particles around in order to control the movement of the cake. The screen filter is designed to sieve the particles from the water. The cleaning process for this type of machine can be difficult compared to other separation designs. The design is mostly optimized for low maintenance and provided with good sealing to prevent leaking and breakup of the construction. Necessary post-treatment systems After removing liquids from the slurry to form a cake of solids in the centrifuge, further treatment is required to completely dry the solids. Drying is the most common process used in industry. Another post-treatment option is to treat the products with another stage of deliquoring. New development The modern screen/scroll centrifuge has been modified in several ways from the original design: The addition of a long-life parts package, which reduces sliding abrasion in the feed zone by using a cone cap to deflect the feed input from the top. The mechanics of the process have also been optimized to achieve better products. New screens have become available that are perforated with a micro-waterjet process. These screens offer significantly greater product recovery in combination with drier output. This manufacturing process also allows screens to be made from extremely abrasion-resistant materials, such as tungsten-carbide composites, for very high-wear applications such as coal. Ultrafine screening modifications allow particles down to 50 micrometres to be handled; the modification is made through the screen filter, which can produce higher solids recovery. Other developments made on the screen scroll centrifuge are tight sealing, the ability to work in continuous mode, minimum power consumption, low-friction gearing, and a low-maintenance design. All of these modifications are made to ensure the safety of the process, with lower power consumption and ease of maintenance. References Centrifuges
Screen scroll centrifuge
[ "Chemistry", "Engineering" ]
2,745
[ "Chemical equipment", "Centrifugation", "Centrifuges" ]
40,805,725
https://en.wikipedia.org/wiki/Vanadium%28II%29%20bromide
Vanadium(II) bromide is an inorganic compound with the formula VBr2. It adopts the cadmium iodide structure, featuring octahedral V(II) centers. A hexahydrate is also known. The hexahydrate undergoes partial dehydration to give the tetrahydrate. Both the hexa- and tetrahydrates are bluish in color. The compound is produced by the reduction of vanadium(III) bromide with hydrogen. Further reading Stebler, A.; Leuenberger, B.; Guedel, H. U. "Synthesis and crystal growth of A3M2X9 (A = Cs, Rb; M = Ti, V, Cr; X = Cl, Br)" Inorganic Syntheses (1989), volume 26, pages 377–85. References Bromides Metal halides Vanadium(II) compounds
Vanadium(II) bromide
[ "Chemistry" ]
197
[ "Bromides", "Inorganic compounds", "Metal halides", "Salts" ]
40,806,652
https://en.wikipedia.org/wiki/3%2C4-Dimethoxycinnamic%20acid
3,4-Dimethoxycinnamic acid is a cinnamic acid derivative isolated from coffee beans. References Carboxylic acids O-methylated phenylpropanoids
3,4-Dimethoxycinnamic acid
[ "Chemistry" ]
42
[ "Carboxylic acids", "Functional groups" ]
40,809,375
https://en.wikipedia.org/wiki/C8H10N4O3
{{DISPLAYTITLE:C8H10N4O3}} The molecular formula C8H10N4O3 may refer to: Liberine 1,3,7-Trimethyluric acid, also known as trimethyluric acid and 8-oxy-caffeine Molecular formulas
C8H10N4O3
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
40,809,862
https://en.wikipedia.org/wiki/Centrifugal%20pendulum%20absorber
A centrifugal pendulum absorber is a type of tuned mass damper. It reduces the amplitude of torsional vibration in drive trains that use a combustion engine. History The centrifugal pendulum absorber was first patented in 1937 by R. Sarazin, and a different version by R. Chilton in 1938. Generally, both Sarazin and Chilton are credited with the invention. Sarazin's work was used during World War II by Pratt & Whitney for aircraft engines with increased power output. The power increase caused an increase in torsional vibrations which threatened durability. This resulted in the Pratt & Whitney R-2800 engine, which used pendulum weights attached to the crank shaft. The use of centrifugal pendulum absorbers in land vehicles did not start until later. Although internal combustion engines had always caused torsional vibrations in the drive train, the vibration amplitude was generally not high enough to affect durability or driver comfort. One application existed in tuned racing engines, where torsional crank shaft vibrations could cause damage to the cam shaft or valves. In this application a centrifugal pendulum absorber is directly attached to the crank shaft. In 2010, centrifugal pendulum absorbers following the patents of Sarazin and Chilton were introduced in a BMW 320D. The reason was again the increase in torsional vibrations from higher-power engines, in this case the 4-cylinder diesel engine BMW N47. Unlike the previous designs, the centrifugal pendulum absorber was not attached to the combustion engine but to a dual mass flywheel. Function The function of a centrifugal pendulum absorber, as with any tuned mass absorber, is based on an absorption principle rather than a damping principle. The distinction is significant since dampers reduce the vibration amplitude by converting the vibration energy into heat, whereas absorbers store the energy and return it to the vibrating system at the appropriate time. Centrifugal pendulum absorbers, like tuned mass absorbers, are not part of the force/torque flow. The centrifugal pendulum absorber differs from the tuned mass absorber in its absorption range: it is effective for an entire order instead of a narrow frequency range. Modern Applications Internal combustion engines follow a development trend towards fewer cylinders, increased output per cylinder, and operation at lower engine speeds. This increases engine efficiency but causes the engine's torsional vibrations to grow. The vibrations lead to durability concerns as well as reduced comfort for the passengers and have to be counteracted through the use of harmonic dampers and absorbers. This shifts the balance between the cost of centrifugal pendulum absorber technology and its benefit for drive train efficiency. The following cars use centrifugal pendulum absorbers: BMW 320D Mercedes E250 Diesel Chevrolet Colorado Diesel GM products equipped with the 2.7L L3B engine Chevrolet Corvette (C8) References External links Schaeffler Media Library - Centrifugal Pendulum Absorber - video depicting a dual mass flywheel with a centrifugal pendulum absorber EPI Crankshaft Torsional Absorbers - centrifugal pendulum absorber on the crankshaft of an airplane engine Engine History R-2800 - development of an engine crank shaft with centrifugal pendulum absorber Mechanical vibrations Pendulums Engine technology Mechanical engineering
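To illustrate order tuning: in the textbook small-amplitude approximation, a pendulum carried at radius R from the rotation axis with effective pendulum length L has a natural frequency proportional to the rotation speed, so it stays tuned to the excitation order n ≈ sqrt(R/L) at every engine speed. The sketch below is a minimal illustration of that standard relation; the dimensions are assumed example values, not data from any production absorber.

```python
import math

def tuning_order(carrier_radius_m: float, pendulum_length_m: float) -> float:
    """Excitation order to which a centrifugal pendulum is tuned (small-amplitude approximation)."""
    return math.sqrt(carrier_radius_m / pendulum_length_m)

def absorber_frequency_hz(engine_rpm: float, order: float) -> float:
    """The pendulum's natural frequency tracks rotation speed, so the tuned order is speed-independent."""
    return order * engine_rpm / 60.0

# Assumed example: pivot radius 80 mm, pendulum length 20 mm -> order 2,
# matching the dominant firing order of a 4-cylinder, 4-stroke engine.
n = tuning_order(0.080, 0.020)
print(n)                                   # 2.0
print(absorber_frequency_hz(3000, n))      # 100 Hz at 3,000 rpm
```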
Centrifugal pendulum absorber
[ "Physics", "Technology", "Engineering" ]
692
[ "Structural engineering", "Applied and interdisciplinary physics", "Engines", "Engine technology", "Mechanics", "Mechanical vibrations", "Mechanical engineering" ]
49,836,454
https://en.wikipedia.org/wiki/Kundu%20equation
The Kundu equation is a general form of integrable system that is gauge-equivalent to the mixed nonlinear Schrödinger equation. It was proposed by Anjan Kundu as with arbitrary function and the subscripts denoting partial derivatives. Equation (1) is shown to be reducible, for the choice of , to an integrable class of mixed nonlinear Schrödinger equations with cubic–quintic nonlinearity, given in a representative form Here are independent parameters, while Equation , more specifically equation , is known as the Kundu equation. Properties and applications The Kundu equation is a completely integrable system, admitting a Lax pair representation, exact solutions, and higher conserved quantities. Along with its various particular cases, this equation has been investigated for its exact travelling wave solutions, exact solitary wave solutions via bilinearization, and Darboux transformation, together with the orbital stability of such solitary wave solutions. The Kundu equation has been applied to various physical processes such as fluid dynamics, plasma physics, and nonlinear optics. It is linked to the mixed nonlinear Schrödinger equation through a gauge transformation and is reducible to a variety of known integrable equations, such as the nonlinear Schrödinger equation (NLSE), the derivative NLSE, the higher nonlinear derivative NLSE, and the Chen–Lee–Liu, Gerdjikov–Ivanov, and Kundu–Eckhaus equations, for different choices of the parameters. Kundu–Eckhaus equation A generalization of the nonlinear Schrödinger equation with an additional quintic nonlinearity and a nonlinear dispersive term was proposed in the form which may be obtained from the Kundu equation when restricted to . The same equation, limited further to the particular case , was introduced later as the Eckhaus equation, following which equation is presently known as the Kundu–Eckhaus equation. The Kundu–Eckhaus equation can be reduced to the nonlinear Schrödinger equation through a nonlinear transformation of the field; the two are therefore gauge-equivalent integrable systems, since they are related by this gauge transformation. Properties and applications The Kundu–Eckhaus equation is associated with a Lax pair, higher conserved quantities, exact soliton solutions, rogue wave solutions, etc. Over the years various aspects of this equation, its generalizations and its links with other equations have been studied. In particular, the relationship of the Kundu–Eckhaus equation with Johnson's hydrodynamic equation near criticality has been established, and its discretizations, reduction via Lie symmetry, complex structure via the Bernoulli subequation, and bright and dark soliton solutions via Bäcklund and Darboux transformations, with the associated rogue wave solutions, have been studied. RKL equation A multi-component generalisation of the Kundu–Eckhaus equation, known as the Radhakrishnan–Kundu–Lakshmanan (RKL) equation, was proposed in nonlinear optics for fiber-optic communication through soliton pulses in a birefringent non-Kerr medium and was subsequently analysed for its exact soliton solutions and other aspects in a series of papers. Quantum aspects Though the Kundu–Eckhaus equation (3) is gauge-equivalent to the nonlinear Schrödinger equation, the two differ with respect to their Hamiltonian structures and field commutation relations. 
The Hamiltonian operator of the Kundu–Eckhaus quantum field model, given by and defined through the bosonic field operator commutation relation , is more complicated than the well-known bosonic Hamiltonian of the quantum nonlinear Schrödinger equation. Here indicates normal ordering in bosonic operators. This model corresponds to a double δ-function interacting Bose gas and is difficult to solve directly. One-dimensional anyon gas However, under the nonlinear transformation of the field below: the model can be transformed to: i.e. the same form as the quantum model of the nonlinear Schrödinger equation (NLSE), though it differs from the NLSE in its content, since the fields involved are no longer bosonic operators but exhibit anyon-like properties, etc., where for , though at the coinciding points the bosonic commutation relation still holds. In analogy with the Lieb–Liniger model of a δ-function Bose gas, the quantum Kundu–Eckhaus model in the N-particle sector therefore corresponds to a one-dimensional (1D) anyon gas interacting via a δ-function interaction. This model of an interacting anyon gas was proposed and exactly solved by the Bethe ansatz in , and this basic anyon model has been studied further for investigating various aspects of the 1D anyon gas, as well as extended in different directions. References External links Painlevé Analysis. Darboux Transformation. Mixed Nonlinear Schrödinger equation Gerdjikov–Ivanov equation 1D anyon gas How fiber optics works Partial differential equations Exactly solvable models Schrödinger equation
Kundu equation
[ "Physics" ]
1,006
[ "Quantum mechanics", "Eponymous equations of physics", "Equations of physics", "Schrödinger equation" ]
49,841,592
https://en.wikipedia.org/wiki/Radiation%20detection
The following Radiological protection instruments can be used to detect and measure ionizing radiation: Ionization chambers Gaseous ionization detectors Geiger counters Photodetectors Scintillation counters Semiconductor detectors Radioactivity Radiation protection
Radiation detection
[ "Physics", "Chemistry" ]
45
[ "Radioactivity", "Nuclear physics" ]
49,844,865
https://en.wikipedia.org/wiki/Echinoderm%20and%20flatworm%20mitochondrial%20code
The echinoderm and flatworm mitochondrial code (translation table 9) is a genetic code used by the mitochondria of certain echinoderm and flatworm species. The code    AAs = FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNNKSSSSVVVVAAAADDEEGGGG Starts = -----------------------------------M---------------M------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) Differences from the standard code Systematic range Asterozoa (starfishes) Echinozoa (sea urchins) Rhabditophora among the Platyhelminthes See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
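The four strings above fully specify the table: position i in the AAs string gives the amino acid for the codon formed by Base1[i], Base2[i], Base3[i]. The short script below is illustrative only (it is not part of the NCBI specification); it rebuilds the lookup from those strings and prints the codons whose assignments differ from the standard code (AAA, AGA, AGG and TGA).

```python
aas   = "FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNNKSSSSVVVVAAAADDEEGGGG"
base1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
base2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
base3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# Build the codon -> amino acid map for translation table 9; '*' marks a stop codon.
table9 = {b1 + b2 + b3: aa for b1, b2, b3, aa in zip(base1, base2, base3, aas)}

# Codons assigned differently from the standard genetic code in this table.
for codon in ("AAA", "AGA", "AGG", "TGA"):
    print(codon, "->", table9[codon])
# AAA -> N (Lys in the standard code), AGA/AGG -> S (Arg), TGA -> W (stop)
```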
Echinoderm and flatworm mitochondrial code
[ "Chemistry", "Biology" ]
543
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
60,055,007
https://en.wikipedia.org/wiki/Lyapunov%20dimension
In the mathematics of dynamical systems, the concept of Lyapunov dimension was suggested by Kaplan and Yorke for estimating the Hausdorff dimension of attractors. The concept has since been developed and rigorously justified in a number of papers, and nowadays various approaches to the definition of the Lyapunov dimension are in use. Attractors with noninteger Hausdorff dimension are called strange attractors. Since the direct numerical computation of the Hausdorff dimension of attractors is often a problem of high numerical complexity, estimates via the Lyapunov dimension have become widespread. The Lyapunov dimension was named after the Russian mathematician Aleksandr Lyapunov because of its close connection with the Lyapunov exponents. Definitions Consider a dynamical system , where is the shift operator along the solutions: , of ODE , , or difference equation , , with continuously differentiable vector-function . Then is the fundamental matrix of solutions of the linearized system and denote by , singular values with respect to their algebraic multiplicity, ordered by decreasing for any and . Definition via finite-time Lyapunov dimension The concept of finite-time Lyapunov dimension and the related definition of the Lyapunov dimension, developed in the works of N. Kuznetsov, are convenient for numerical experiments where only a finite time can be observed. Consider an analog of the Kaplan–Yorke formula for the finite-time Lyapunov exponents: with respect to the ordered set of finite-time Lyapunov exponents at the point . The finite-time Lyapunov dimension of a dynamical system with respect to an invariant set is defined as follows In this approach the use of the analog of the Kaplan–Yorke formula is rigorously justified by the Douady–Oesterlé theorem, which proves that for any fixed the finite-time Lyapunov dimension for a closed bounded invariant set is an upper estimate of the Hausdorff dimension: Looking for the best such estimate , the Lyapunov dimension is defined as follows: The possibilities of changing the order of the time limit and the supremum over the set are discussed, e.g., in. Note that the above defined Lyapunov dimension is invariant under Lipschitz diffeomorphisms. Exact Lyapunov dimension Let the Jacobian matrix at one of the equilibria have simple real eigenvalues: , then If the supremum of local Lyapunov dimensions on the global attractor, which involves all equilibria, is achieved at an equilibrium point, then this allows one to obtain an analytical formula for the exact Lyapunov dimension of the global attractor (see the corresponding Eden conjecture). Definition via statistical physics approach and ergodicity Following the statistical physics approach and assuming ergodicity, the Lyapunov dimension of an attractor is estimated by the limit value of the local Lyapunov dimension of a typical trajectory belonging to the attractor. In this case and . From a practical point of view, the rigorous use of the ergodic Oseledec theorem, verification that the considered trajectory is a typical trajectory, and the use of the corresponding Kaplan–Yorke formula is a challenging task (see, e.g., discussions in). The exact limit values of finite-time Lyapunov exponents, if they exist and are the same for all , are called the absolute ones and are used in the Kaplan–Yorke formula. Examples of the rigorous use of ergodic theory for the computation of the Lyapunov exponents and dimension can be found in. References Dynamical systems
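For readers who want the standard Kaplan–Yorke recipe in computational form: with the Lyapunov exponents sorted in decreasing order, take the largest index j for which the partial sum of the first j exponents is still non-negative, and add the fractional part obtained by dividing that sum by the absolute value of the next exponent. The sketch below is an illustrative implementation of that common formula; the example exponents are assumed values, not taken from any particular system discussed in this article.

```python
def kaplan_yorke_dimension(exponents):
    """Lyapunov (Kaplan-Yorke) dimension from a list of Lyapunov exponents."""
    lam = sorted(exponents, reverse=True)
    j, partial = 0, 0.0
    while j < len(lam) and partial + lam[j] >= 0:
        partial += lam[j]
        j += 1
    if j == len(lam):                # all partial sums non-negative
        return float(len(lam))
    return j + partial / abs(lam[j])

# Assumed example: exponents of a chaotic three-dimensional flow.
print(kaplan_yorke_dimension([0.9, 0.0, -12.8]))  # about 2.07
```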
Lyapunov dimension
[ "Physics", "Mathematics" ]
758
[ "Mechanics", "Dynamical systems" ]
60,058,413
https://en.wikipedia.org/wiki/Pyramidobacter
Pyramidobacter is a gram-negative genus of bacteria from the family of Synergistaceae. Pyramidobacter piscolens has been isolated from the human mouth. See also List of bacterial orders List of bacteria genera References Synergistota Bacteria genera Monotypic bacteria genera
Pyramidobacter
[ "Biology" ]
60
[ "Bacteria stubs", "Bacteria" ]
60,060,582
https://en.wikipedia.org/wiki/HEMPT%203050
HEMPT 3050 is a satellite station-keeping ion thruster, currently selected for use on the German Heinrich Hertz satellite. It is designed for two roles: orbit-raising and station-keeping. To date it has been demonstrated to operate for over 9,000 hours. The thruster was originally planned for use on the Hispasat AG1, based on the SmallGEO bus, but was switched to Heinrich Hertz, which is based on the same platform. The thruster design has been in development since 2002, and thanks to its unique magnetic confinement it features both high efficiency and negligible erosion, which contributes to a long lifetime. Specifications References Ion engines
HEMPT 3050
[ "Physics", "Chemistry" ]
141
[ "Ions", "Ion engines", "Matter" ]
45,515,131
https://en.wikipedia.org/wiki/Finite%20Volume%20Community%20Ocean%20Model
The Finite Volume Community Ocean Model (FVCOM; Formerly Finite Volume Coastal Ocean Model) is a prognostic, unstructured-grid, free-surface, 3-D primitive equation coastal ocean circulation model. The model is developed primarily by researchers at the University of Massachusetts Dartmouth and Woods Hole Oceanographic Institution, and used by researchers worldwide. Originally developed for the estuarine flooding/drying process, FVCOM has been upgraded to the spherical coordinate system for basin and global applications. References External links : "About us". FVCOM website, by the University of Massachusetts Dartmouth Physical oceanography Numerical climate and weather models
Finite Volume Community Ocean Model
[ "Physics" ]
129
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
45,517,607
https://en.wikipedia.org/wiki/X%20Crucis
X Crucis is a classical Cepheid variable star in the southern constellation of Crux. X Crucis is a pulsating variable star with an extremely regular amplitude and period. Its apparent magnitude varies from 8.1 to 8.7 every 6.22 days. This type of variable is known as a Cepheid after δ Cephei, the prototype of the class. X Crucis is a population I star and so is a classical or type I Cepheid variable, to be distinguished from older low-mass stars called type II Cepheid variables. Classical Cepheids pulsate radially so that their size varies. X Crucis pulsates in its fundamental mode and its properties indicate that it is crossing the instability strip for the third time as it evolves back to cooler temperatures. Its radius varies by about during each cycle, approximately 8% of its mean radius. At the same time its temperature varies between 5,180 and 6,029 K. The radius and temperature do not vary in sync, with the smallest size occurring as the temperature approaches its maximum. The brightness increases rapidly to a maximum when the star is hottest, then decreases more slowly. This is one of the properties that indicate fundamental mode pulsation. References Crux Classical Cepheid variables Crucis, X J12462227-5907290 110945 G-type supergiants Durchmusterung objects
X Crucis
[ "Astronomy" ]
306
[ "Crux", "Constellations" ]
45,520,491
https://en.wikipedia.org/wiki/SDSS%20J0100%2B2802
SDSS J0100+2802 (SDSS J010013.02+280225.8) is a hyperluminous quasar located near the border of the constellations Pisces and Andromeda. It has a redshift of 6.30, which corresponds to a distance of 12.8 billion light-years from Earth; it formed 900 million years after the Big Bang. Description It appears to be receding at a velocity of about 1.38 × 10^8 m/s. It radiates an immense amount of power, on the order of 10^41 watts, corresponding to an absolute bolometric magnitude of −31.7: roughly 4.3 × 10^14 times the luminosity of the Sun, and 40,000 times as luminous as all of the 400 billion stars of the Milky Way galaxy combined. SDSS J0100+2802 is about four times more luminous than SDSS J1148+5251, and seven times more luminous than ULAS J1120+0641, the most distant quasar known. It harbors a black hole with a mass of 12 billion solar masses (estimated from MgII emission line correlations). This makes it one of the most massive black holes discovered so early in the universe, although it is less than one fifth as massive as Ton 618, the most massive black hole known. The diameter of this black hole is about 70.9 billion kilometres, seven times the diameter of Pluto's orbit. See also List of quasars SDSS J1254+0846 References Pisces (constellation) Quasars Supermassive black holes SDSS objects
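The quoted size of the black hole follows from the Schwarzschild radius, r_s = 2GM/c². The short calculation below is an illustrative back-of-the-envelope check (not a figure taken from the discovery paper) showing that a 12-billion-solar-mass object does give a diameter of roughly 70.9 billion kilometres:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 12e9 * M_sun                  # 12 billion solar masses
r_s = 2 * G * M / c ** 2          # Schwarzschild radius in metres
print(2 * r_s / 1e12)             # diameter in billions of km -> about 71
```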
SDSS J0100+2802
[ "Physics", "Astronomy" ]
343
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Pisces (constellation)" ]
50,901,898
https://en.wikipedia.org/wiki/Invasion%20percolation
Invasion percolation is a mathematical model of realistic fluid distributions for slow immiscible fluid invasion in porous media, in percolation theory. It "explicitly takes into account the transport process taking place". A wetting fluid such as water takes over from a non-wetting fluid such as oil, and capillary forces are taken into account. It was introduced by Wilkinson and Willemsen (1983). References Percolation theory Fluid dynamics
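In its simplest site form the model is easy to state algorithmically: assign each site an independent random threshold (representing the capillary resistance of a pore), start the invading fluid at a seed, and at every step invade the perimeter site with the smallest threshold. The sketch below is a minimal illustration of that textbook procedure (without trapping) on a square grid; it is not a reproduction of Wilkinson and Willemsen's original algorithmic details.

```python
import heapq, random

def invasion_percolation(n=50, steps=400, seed=0):
    """Invade an n x n grid from the centre, always taking the easiest perimeter site."""
    rng = random.Random(seed)
    threshold = [[rng.random() for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    invaded = {start}
    frontier = []                    # min-heap of (threshold, site) on the perimeter

    def push_neighbours(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in invaded:
                heapq.heappush(frontier, (threshold[nx][ny], (nx, ny)))

    push_neighbours(*start)
    for _ in range(steps):
        while frontier and frontier[0][1] in invaded:
            heapq.heappop(frontier)  # drop stale entries
        if not frontier:
            break
        _, site = heapq.heappop(frontier)
        invaded.add(site)
        push_neighbours(*site)
    return invaded

print(len(invasion_percolation()))   # number of invaded pores, here 401
```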
Invasion percolation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
95
[ "Statistical mechanics stubs", "Fluid dynamics stubs", "Physical phenomena", "Phase transitions", "Applied mathematics", "Chemical engineering", "Theoretical physics", "Percolation theory", "Computational physics", "Combinatorics", "Applied mathematics stubs", "Theoretical physics stubs", "P...
50,904,721
https://en.wikipedia.org/wiki/Cantilever%20enhanced%20photoacoustic%20spectroscopy
Cantilever enhanced photoacoustic spectroscopy enables the detection of small amounts of trace gases, which is vital in many applications. Photoacoustic spectroscopy is one of the most sensitive optical detection schemes. It is based on detecting a gas-specific acoustic wave that originates from the absorption of light in the medium. The sensitivity of traditional membrane microphones is limited by electrical noise and by the nonlinearity of the displacement of the mechanical sensor at high optical power levels. Conventional membrane microphones can be replaced with optically measured micromechanical cantilevers to enhance sensitivity. Characteristics The MEMS cantilever approach detects pressure changes in a photoacoustic cell. High sensitivity is achieved by using a cantilever pressure sensor that is over a hundred times more sensitive than the membrane conventionally used in photoacoustic spectroscopy. A laser-based readout interferometer is able to accurately measure displacements from well under a picometer up to millimeters. Technology An extremely thin cantilever portion moves like a flexible door due to the pressure variations in the surrounding gas. The displacement of the cantilever is measured with an accurate interferometric readout system. This way the "breathing effect" can be avoided. The so-called breathing effect occurs with the capacitive measurement principle, where the opposing electrode damps the movement of the sensor and restricts the dynamic range. Cantilever sensor The cantilever sensor is made out of single-crystal SOI silicon with a specially developed dry-etching process that leads to a highly stable and robust component; as a result, the sensor is practically immune to temperature and humidity variations. In addition, the sensor does not suffer from wear. The sensor and readout can be thermally isolated, allowing a heated gas cell, which enables applications that require gas analysis at elevated temperatures such as chemical emissions monitoring and process control. Applications Cantilever enhanced photoacoustic measurement technology can be used, for example, in the detection and analysis of gases, liquids, and solid materials in research, industrial, environmental, safety, and security applications. References Spectroscopy
Cantilever enhanced photoacoustic spectroscopy
[ "Physics", "Chemistry" ]
428
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
50,908,017
https://en.wikipedia.org/wiki/Birchfield%20v.%20North%20Dakota
Birchfield v. North Dakota, 579 U.S. 438 (2016) is a case in which the Supreme Court of the United States held that the search incident to arrest doctrine permits law enforcement to conduct warrantless breath tests but not blood tests on suspected drunk drivers. Background Birchfield was a consolidation of three cases: Birchfield v. North Dakota, Bernard v. Minnesota, and Beylund v. Levi. Birchfield was charged with violation of a North Dakota statute for refusing to submit to blood alcohol content testing; Bernard was charged with a violation of a Minnesota statute for refusing to submit to breath alcohol testing; Beylund underwent a blood alcohol test consistent with North Dakota's implied consent law and challenged the constitutionality of that law after an administrative hearing based on the test results led to the revocation of his license. Issue In Missouri v. McNeely, 569 U.S. 141 (2013), the Court held that in the absence of an argument based on facts specific to the case "the natural dissipation of alcohol from the bloodstream does not always constitute an exigency justifying the warrantless taking of a blood sample". This contrasted with the Court's 1966 decision in Schmerber v. California, 384 U.S. 757, which held that such an exigent circumstance did exist, inferring that an officer might reasonably believe it to exist. The court "did not address any potential justification for warrantless testing of drunk driving suspects, except for the exception 'at issue in the case,' namely, the exception for exigent circumstances". The issue before the court was how the "search incident to arrest doctrine applies to breath and blood tests". Is warrantless alcohol testing incident to drunk driving arrests to determine blood alcohol content a violation of the Fourth Amendment? Decision The Court held that both breath tests and blood tests constitute a search under the Fourth Amendment. The Court then proceeded to analyze both types of tests under the search incident to arrest doctrine, weighing on the one hand "the degree to which it intrudes upon an individual's privacy" and on the other hand "the degree to which it is needed for the promotion of legitimate governmental interests." Applied to breath tests, the Court concluded that breath tests do not implicate significant privacy concerns. Blood tests, on the other hand, are significantly more intrusive. Turning to the government's interest in the tests, the Court concluded that penalizing refusal serves the very important function of providing an incentive to cooperate in alcohol testing. Weighing these interests, the Court concluded that requiring breath tests is constitutional; however, requiring blood tests is not, as the goal of traffic safety can be obtained by less invasive means (such as breath tests). In the majority opinion, in addressing the limits of implied consent laws, the court stated that while its "prior opinions have referred approvingly to the general concept of implied-consent laws", "there must be a limit to the consequences to which motorists may be deemed to have consented by virtue of a decision to drive on public roads" and "motorists could be deemed to have consented to only those conditions that are 'reasonable' in that they have a 'nexus' to the privilege of driving". The Court ruled in favor of Birchfield, who was prosecuted for refusing a warrantless blood draw, and ruled against Bernard, who refused a warrantless breath test. 
Beylund, on the other hand, consented to a blood test after police advised him that he was required to do so. The Court therefore remanded Beylund's case to the state court "to reevaluate Beylund's consent given the partial inaccuracy of the officer's advisory." The Supreme Court of North Dakota subsequently avoided the issue by holding that, even assuming the consent was involuntary, the Exclusionary Rule does not apply in the administrative hearing context, and thus affirmed the suspension of his license for testing over the prohibited level set forth in the implied consent / administrative license suspension statute. Justice Thomas's dissent Justice Clarence Thomas wrote that "the search-incident-to-arrest exception to the Fourth Amendment's warrant requirement should apply categorically to all blood alcohol tests, including blood tests. By drawing an arbitrary line between blood tests and breath tests, the majority destabilized the law of exceptions to the warrant requirement and made the jobs of both police officers and lower courts more difficult." The Supreme Court ruled in favor of Birchfield in a 7–1 majority, stating that the refusal to submit to a warrantless blood test may not be criminalized, as it is a violation of the petitioner's Fourth Amendment right against unlawful searches and is protected by neither the search incident to arrest nor the exigent circumstances exception to the Fourth Amendment's warrant requirement. The Supreme Court also stated that this rationale did not extend to warrantless breath tests, because blood tests implicate serious privacy concerns: they can be used to obtain information other than the BAC of the suspected drunk driver, and the process used to obtain a blood sample is intrusive. Justice Sotomayor's dissent Justice Sonia Sotomayor wrote that "the Fourth Amendment's prohibition against warrantless searches should apply to breath tests unless exigent circumstances justify one in a particular case. In establishing exceptions to the warrant requirement, the Court has routinely examined whether a legitimate government interest justified the search in light of the individual's privacy interest and whether that determination should be made based on a case-by-case analysis or a categorical rule." Justice Sotomayor argued that the administration of a warrantless breath test was not imperative to preventing drunk driving, as the suspected drunk driver had already been removed from the roadway and a search warrant could be obtained if necessary. The Supreme Court ruled 6–2 in favor of the State of North Dakota, stating that warrantless breath tests are permitted under the search incident to arrest exception to the Fourth Amendment's warrant requirement and involve minimal physical intrusion. The Supreme Court majority also argued that the administration of warrantless breath tests serves the government's objectives of deterring drunk drivers as well as effectively allowing law enforcement officers to remove drunk drivers already present on the roadways. 
See also List of United States Supreme Court cases Lists of United States Supreme Court cases by volume List of United States Supreme Court cases by the Roberts Court 2017 University of Utah Hospital incident References Further reading External links United States Supreme Court cases United States Supreme Court cases of the Roberts Court United States Fourth Amendment case law 2016 in United States case law Alcohol law in the United States Blood tests Breathalyzer Driving under the influence Legal history of North Dakota
Birchfield v. North Dakota
[ "Chemistry" ]
1,366
[ "Blood tests", "Chemical pathology" ]
50,908,248
https://en.wikipedia.org/wiki/Godfrey%20Boyle
Godfrey Boyle (1945 – 2019) was a British author and academic who was a leading figure in the British alternative technology movement, and an authority on sustainability and renewable energy. He was the founder of Undercurrents, the pioneering magazine of 'radical science and alternative technology'. Early life and education Boyle was born in Brentford, West London, to Kevin and Phyllis Boyle. The family moved to Belfast, where he was educated at St Malachy's College. Boyle later attended Queen's University Belfast, where he studied for an electrical engineering degree but failed his final exams. While studying in Belfast, Boyle edited a student science magazine called Spectrum and pursued interests in the paranormal, alternative philosophy, libertarian and anarchist politics, and pirate radio. Career Undercurrents Moving from Belfast to London, Boyle worked as a journalist on Electronics Weekly before founding Undercurrents in 1972, having had the idea for an 'underground' science and technology magazine since the late 1960s, one which would draw on titles such as Oz and International Times, as well as more obscure publications. Undercurrents, also known as 'Undies', initially came out as collections of individually printed articles and leaflets, put together in a polythene bag to serve as a 'common carrier' to which articles could be added, inspired by ideas of decentralization and networking that Boyle had become interested in. Issue 2 of Undercurrents was dedicated to energy and produced in time for the first United Nations Conference on the Human Environment in Stockholm, in summer 1972, which Boyle attended with the editorial team, including Peter Harper (credited with coining the term 'alternative technology'), who organised a 'People's Technology Exhibition' as an alternative event during the conference. After transitioning to a more conventional format with issue 5, the magazine became a success, achieving a bimonthly circulation of 7,000 copies. At the end of 1973 Boyle left his job at Electronics Weekly to focus on editing Undercurrents, and formed Undercurrents Limited to administer the magazine. Undercurrents continued to be published independently for 10 years before merging with Resurgence magazine. Radical Technology In 1975 Boyle, with Harper, co-edited Radical Technology, which contained contributions from many of those who had worked on Undercurrents and became well known for its series of 'Visions' illustrations by the anarchist artist Clifford Harper. In the same year Boyle published his first book as author, Living on the Sun: harnessing renewable energy for an equitable society, which became influential for its argument that industrial countries could transition away from fossil fuels and towards renewable energy to power their economies. Open University In 1976 Boyle was appointed as a lecturer at the Open University, where he formed the Alternative Technology Group (later the Energy and Environment Research Unit), which led on teaching and research into renewable energy. Alongside his teaching duties, Boyle conducted research on wind and solar systems, including the development of innovative designs for wind turbines, and also early electric bicycles. He also edited the first three editions of Renewable Energy: Power for a Sustainable Future, which remains a leading introductory textbook on renewable energy. He was appointed a personal Chair at the Open University in 2009, in the process becoming possibly the only professor in the UK without a degree. 
Recognition Boyle was a Fellow of the Institution of Engineering and Technology (FIET) and of the Royal Society of Arts (FRSA). Personal life In 1973 Boyle married Sally Maloney, whom he met upon moving to London and who worked on the graphic design and layout for Undercurrents. They settled in Milton Keynes, in the Rainbow Housing Cooperative, which Boyle was involved in founding. They had two children, Holly and Katie. Boyle and Maloney divorced in 1992. In later years he lived in London and Devon with his partner, Romy Fraser. Archive Godfrey Boyle’s archive is catalogued and available at Wellcome Collection (ref no: PP/GBO). References External links Godfrey Boyle discussing Undercurrents (origins, financing, influences and role) (Architectural Association School of Architecture, 1975) Godfrey Boyle, 'Living on the Sun: How We Can Power the World on 100% Renewables' (Small is Beautiful Festival, 2012) Godfrey Boyle, 'This could be one of history's great transitions' (Delhi Sustainable Development Summit, 2013) 1945 births 2019 deaths Alumni of Queen's University Belfast People from Brentford Sustainability advocates Environmental engineers Writers from Belfast Writers from the London Borough of Hounslow
Godfrey Boyle
[ "Chemistry", "Engineering" ]
908
[ "Environmental engineers", "Environmental engineering" ]
52,340,569
https://en.wikipedia.org/wiki/Tellurium%20nitride
Tellurium nitride describes chemical compounds of Te containing N3−. Efforts have been made toward the binary nitrides but the results are inconclusive and it appears that such materials are unstable. Still unconfirmed is Te4N4, which would be an analogue of tetraselenium tetranitride (Se4N4) and tetrasulfur tetranitride (S4N4). It has long been known that ammonia reacts with tellurium tetrachloride, which is similar to the method of synthesis of S4N4. The reaction of TeCl4 with a THF solution of N(SiMe3)3 gives a well-defined tellurium nitride [Te6N8(TeCl2)4(THF)4]. See also Tellurium tetraazide (TeN12) References Tellurium compounds Nitrides Hypothetical chemical compounds
Tellurium nitride
[ "Chemistry" ]
197
[ "Theoretical chemistry", "Hypothetical chemical compounds", "Hypotheses in chemistry", "Theoretical chemistry stubs" ]
52,343,411
https://en.wikipedia.org/wiki/Container%20port%20design%20process
Container port design process is a set of correlated practices carried out during container port design, aiming to translate a general business mission into detailed design documents for future construction and operation. The design process involves both conceptual design and detailed design. Funding The source of funding determines the mission and scope of the project. Choices include federal funding (subsidies), state or local funding, and private funding. American ports require subsidies from the federal government in order to keep up with advances in maritime transportation as well as the capabilities of inland freight movement. Often, roughly 50% of the costs each year come from federal sources. The American Association of Port Authorities (AAPA) is an association that aims at securing and increasing federal funds for American ports. Federal bills which provide funding for ports include the Fixing America's Surface Transportation (FAST) Act: $11 billion in funding to assist in surface transportation improvements; and the National Highway Freight Program (NHFP): at least $10 billion in funding reorganized for more efficient use in transportation improvements. Most often, the state's Department of Transportation (DOT) is the largest state or local financier of public investment. The DOTs see the ports as key elements in the transport systems they are responsible for, such as railways and highways. Investment from private entities is critical to the creation and execution of port activities. American ports are often run by private entities in the sense that day-to-day functions are financed and managed with the primary goal of creating revenue. The municipal facilities of the terminals are kept up by the port authority, but the equipment and infrastructure required for operations are under the private entities' control. With the creation of new ports, public-private partnerships, otherwise known as 3P, are often formed to bring in the upfront capital necessary for someone to take on the financial risk of operating a terminal. Container terminals are no different in this sense from other types of terminals. Cargo Cargo determines the main function, transportation mode, and related characteristics required for the container port. In container port design, the object cargo is the intermodal container. Containers are usually classified as 20-foot and 40-foot; 53-foot containers were introduced and are used in both the US and Canada, mainly for domestic road and rail transport. Vessels The type of vessel, its dimensions, and its capacity determine the required input capacity of the port, which involves berth design, water-borne handling equipment selection, and requirements for both storage and land-mode capacity. The characteristics of vessels shape the port characteristics as follows: Main dimensions: length, which determines the widths and bends of the channel, the size required for the terminal, and the maximum number of berths; breadth and air draft, which influence cargo-handling equipment selection and the width of channels; draft, which determines the depth along berths. Cargo capacity, which governs the minimum storage requirements for the cargo ship and can affect the loading and unloading process, usually expressed in cranes per ship. Designed vessel function: whether the vessel has its own cargo handling equipment and how it loads cargo. Usually, container vessels require external handling equipment. 
Vessel routing shall also be considered, as the intermodal capability requirements for import, export, and trans-shipment services will differ. The selection of the design vessel shall also take into account the development of the container ship; underestimating the trend towards larger container ships will result in inadequate capacity and low sustainability. Location Location selection should start with data collection and finish with the receipt of government permits. The choice of location is guided by the philosophy of the triple bottom line and by considerations of waterside access, natural conditions, intermodal connections, and stakeholders. For ports: promotion of the development of the urban and regional economy; the requirements of vessel maneuvering, braking, harbor navigation, and berthing operations; a greenfield site for general port development, such as new quays, reclamation, or a breakwater. For container terminals: the availability of deep water; the environmental site conditions, including oceanographic and meteorological conditions; the availability of land; good inland transport links or intermodal connections; the soil conditions. Waterside Access Waterside access is the condition of the waterways at the location, which determines the achievable depth, the number of berths, vessel accessibility, and the effort required for development. The access channel is a waterway linking the basins of a port to the open sea. The location of the access channel is important because it determines the oceanographic factors, such as waves, tidal cycle, currents, and wind, met by the ships in the channel. The channel must also maintain sufficient depth and be able to accommodate the world's largest cargo vessels. For example, to meet this need the Port Authority of New York and New Jersey runs the Main Navigation Channel Deepening Program, which over 30 years dredged 38 miles of federal channels to depths of as much as 50 feet. Based on PIANC (1997), there are several aspects a designer needs to consider: the vessel's dimensions and velocity; the cargo hazard level; the traffic density; and the physical environmental conditions, consisting of wind, waves, currents, tidal range, as well as the hardness of the bottom surface. Natural conditions Natural conditions are classified according to whether the area selected is developed or natural. The natural condition determines whether there will be existing utilities and constraints for future terminal development. Intermodal connections An intermodal connection is a place where rail, truck, barge, and other transport modes converge. Intermodal connections for a container terminal mainly consist of road and rail. The capacity of the intermodal connection (docking and the handling, storage, and transfer of cargo) determines the capability of terminal cargo transportation to and from the land. Stakeholders Stakeholders are defined as any group or individual who can affect or is affected by the achievement of the organization's objectives (R. Edward Freeman, 1984). Stakeholder analysis is a process of systematically gathering and analyzing qualitative information to determine whose interests should be taken into account when developing and/or implementing a policy or program. The purpose of a stakeholder analysis is to assess the attitudes of the stakeholders regarding the realization of a new container terminal. 
Stakeholders in location selection mainly consist of trade organizations, maritime groups, regional government, neighborhood societies, environmental groups, and other people with a direct or indirect interest in the terminal. Selection shall involve their participation so as to avoid strong conflicts during future development and to keep terminal development adapted to the changing demands of these stakeholders. Agencies and societies involved in this process are: Port Authority, Municipality, Province, National Government, Residents, Potential Operators, Environmentalists. Permits Permits are crucial in the design process. Large-scale development projects that have the potential to cause significant adverse environmental impacts need permits to start operation; projects operating without permits are committing an offense. A port needs port permits to open. Environmental permits will be issued by local environmental protection agencies and usually consist of three parts: water side, land side, and air emissions. Permits for ports should include a clean air permit, construction permit, discharge permit, dredge permits, and water discharge permit. Detailed design The consideration of infrastructure includes plans for the deployment and construction of infrastructure to implement the functions of the terminal. The wharf at a terminal is the structure that forms the edge of the landside facility. It is made up of both the topside and the face. The face of the wharf is where equipment is mounted to allow vessels to berth. It also lies within the range of high water levels, making its structures susceptible to corrosion; water-tightness and corrosion protection are a must for any structural elements that make up the face. The topside of the wharf is what is broken down into berths: pre-designated lengths of the wharf are separated into identified berths based on the design vessel's characteristics. Container cranes operate along the wharf when vessels have berthed. Warehouses are created at container terminals to hold specific goods that are transported to the port but are not being shipped out in the same container. This style of transport is not common; however, it can be a service supplied by the terminal owners to increase imports. Those goods, when warehoused, incur additional handling and storage costs, increasing revenue as well. Maintenance is the concept of using engineering theories and practices, risk management, and maintenance strategy to plan and implement routine maintenance of facilities and operating systems. The overall maintenance policy for a port or terminal should be to maintain all of the facility assets, to the extent that the level of expenditure is justified, so that the assets remain serviceable during their design life or longer and for reasons of safety and security. A typical maintenance team involves experienced personnel under the control of a qualified engineering maintenance manager, supervisory staff, and engineering inspection staff. It should meet the requirements listed in the ISM (International Safety Management) Code. The maintenance facilities required will include a workshop with sufficient space to work on approximately 10% of the mobile equipment and spreaders at any one time. The maintenance facilities should be located outside but close to the container yard. It is necessary to provide a stores section within the maintenance facility that will hold necessary spare components and materials. The specific requirements for maintenance are as follows. 
Planned preventive maintenance and statutory inspections of equipment are normally carried out during the day shift, when all specialist trades are available. Outside of the day shift, minimal manning levels are normally retained to cover breakdowns and emergency repairs only. For other specialist areas such as IT and electronics, it is usual to retain specialized personnel due to the specific needs of such systems and equipment. Mechanical and electrical engineering and IT personnel will be responsible for the daily maintenance of cargo handling equipment and other aspects of the facility that require these skills, and for specific IT operating systems such as the TOS. High voltage electrical cables and switchgear should be maintained by specialist contractors, whilst maintenance of low and medium voltage cables and domestic electrics can be undertaken by electrical tradesmen. A lay-down area is the space where container handling equipment places full or empty containers before they are loaded for the next step in their journey to their destination. The lay-down area is composed of multiple structural layers to support the loads brought on by the equipment and cargo. The first layer, the foundation, consists of either the existing or improved subgrade of the location. To add extra strength to the foundation, the existing soils are compacted; further, soil improvements such as stone columns are installed, or the unsatisfactory soils are removed and a new fill soil is brought in, graded, and compacted to meet requirements. The second layer is asphalt paving. This pavement differs from the typical highway and road pavement as the loads are generally more stationary as well as much smaller in magnitude. This type of pavement contains Hydraulically Bound Materials (HBM), an ingredient used to provide higher compressive strength to the asphalt. The mixture is the first part of designing the asphalt, with the second being the thickness. Both the materials and the thickness can be calculated by following existing design guides published by engineering societies. The World Association for Waterborne Transport Infrastructure's (PIANC) Report 165-2015 can provide further guidance on container terminal pavements. The lay-down area surface is also designed for multiple functions. The pavement must drain towards a drainage system as well as have sufficient grip to prevent skidding. Finally, the pavement is painted to show lanes for travel as well as rows in which to place intermodal containers when not in transit. Intermodal yards mainly consist of two parts: rail yards and container storage yards. Rail yards should have access to rails, and container storage yards should have access to trucks. Container storage yards include yards for inbound containers with cargo and internal movements, yards for outbound containers with cargo, yards for trans-shipment containers, and yards for empties. The area requirements are measured in TEU ground slots (the area required for one 20-ft container) plus operating space for equipment that transfers containers to and from the yards and stacks and delivers containers. Port security Port security consists of cargo security, port facility security, staff security, and maritime domain security. Port security should be handled jointly by the coast guard and customs and border protection. Internationally, port security is governed by rules issued by the International Maritime Organization and its 2002 International Ship and Port Facility Security Code. 
During the design process, ports need to come up with a port security plan and implement it. The port security plan should include security survey and risk assessment, physical security and access control, information security, personnel security, maritime terrorism, drug smuggling, stowaways and alien smuggling, roles, responsibilities and legal authorities of port agencies, sea robbery, cargo security, and hazardous materials and intelligence. Customs facility Customs should have base offices both at the warehouse and around the gates. The office at the warehouse is mainly for detecting harmful agricultural products and smuggling (drugs, dirty money). The offices at the gates are mainly for detecting mis-picked cargo or radioactive containers. At the gates there should be radiation-detection equipment aimed at detecting dangerous weapons and radioactive material that could be used to make dirty bombs. Radiation Portal Monitors (RPMs) are passive radiation detection devices used for the screening of individuals, vehicles, cargo, or other vectors for the detection of illicit sources, such as at borders or secure facilities. The Portal VACIS imaging system helps trained operators see the contents of closed containers, assisting them in intercepting weapons, contraband, and other items of interest and verifying shipping manifests. Patented drive-through technology lets trucks drive through the system without stopping, providing an effective solution for high-traffic situations where lengthy manual inspection processes are impractical or undesirable. Mooring Mooring infrastructure at a port describes those structures that mooring lines from vessels can tie off to in order to prevent drifting along or away from the wharf face. The mooring structures are called cleats or bollards, depending on their size and shape. Bollards are designed to handle much larger loads and, in turn, much larger vessels. Manufacturers of these items typically design them and supply the finished design to the consultant to include in the bid documents. Cleats and bollards can be found on many different forms of structure. The most common is the wharf face of a terminal. Other locations can include dolphins, which are stand-alone structures that sit off the face of the landside infrastructure. Another option is other barges or sea vessels, allowing vessels to tie off to each other. Bollards and cleats can have multiple types of mooring lines tied off to them. Bow and stern lines, found at the front and the back of the vessel, are designed to prevent vessels from drifting perpendicular to the berth location. Breast lines come from closer to the centerline of the vessel and span along the vessel to the mooring location to keep the vessel from drifting parallel to the berth. PIANC Report ___ can provide further details on the design of mooring structures. Bollard and cleat manufacturers can provide more details on the dimensions, weights, and capacities of mooring structures. Berthing Berthing of the vessels is analyzed using the design vessel's characteristics of weight, draught, and other specifications, in addition to the requirements set by the terminal location, such as wind speeds, direction, currents, and safe berthing velocities of the approach channel and berth. All these factors come together to determine the maximum amount of energy that must be resisted by the terminal foundation and wharf. Multiple styles of berthing equipment have been designed in response to this requirement. 
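Design practice (for example the PIANC fender guidelines) typically estimates the berthing energy the fender system must absorb as E = ½·M·v²·C_E·C_M·C_S·C_C, the ship's kinetic energy scaled by eccentricity, added (virtual) mass, softness, and berth configuration factors. The sketch below is an illustrative calculation only; the displacement, approach velocity, and coefficient values are assumed examples, not requirements taken from this article or from PIANC.

```python
def berthing_energy_kj(displacement_t, velocity_ms, ce=0.7, cm=1.6, cs=1.0, cc=1.0):
    """Characteristic berthing energy in kJ from the usual half-m-v-squared expression."""
    mass_kg = displacement_t * 1000.0
    return 0.5 * mass_kg * velocity_ms ** 2 * ce * cm * cs * cc / 1000.0

# Assumed example: a 70,000 t container vessel berthing at 0.15 m/s.
print(round(berthing_energy_kj(70_000, 0.15)))  # about 880 kJ
```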
Container terminals are, for the most part, directly on land, eliminating the need for berthing dolphins similar to those described in the Mooring section. Fender systems installed on the wharf face are the main facility for reducing the amount of energy the wharf structure must absorb during berthing. A fender system consists of the fender itself, the panel, and the various hardware required to anchor and stabilize the unit. Fenders are made of a grade of rubber chosen for its flexibility; the more compressible the fender, the more energy it can absorb. They come in multiple sizes and shapes, aimed at handling different situations. Fenders do not need to be mounted in the same location at all times; some are designed to rise and fall with water levels. Panels are large faces that connect to the fender, giving more contact area for the vessel. Their size also helps reduce the reaction on the vessel's hull, which is designed for a certain maximum pressure. They are often covered with a friction-reducing surface to prevent lateral forces from shearing the fender apart. One final element of panel design is associated not just with its dimensions but with its location relative to other fenders. The spacing of the fenders relative to the size of the panel must be set so that the design vessel cannot compress the fender at an angle and contact the wharf: either a second fender panel must be contacted, or the fender must not be able to compress too far in an unsymmetrical fashion. The structural elements of a fender system must be analyzed to ensure the equilibrium and stability of each unit at all times. Chains are installed to keep vessel action or the weight of the panel from putting unnecessary shear on the unit. In addition, anchors are installed into the wharf to fix the fender to the wharf face. At these locations, the foundation of the terminal is strengthened more than at areas of non-contact, due to the larger forces imparted on the structure. Container quay crane rails Container storage yards Cargo berths Pavements Communication Communication is very important in ports because ports are high-risk areas; good communication helps people avoid those risks. In the design process, designers should consider adding base stations to ensure good-quality radio and video contact. The customs officers should have both radio and video contact with truck drivers passing through the gate to make sure they have picked up the right container. Pilots in the port should have good radio communications to guide them while sailing. Port laborers should have good radio communication with each other to avoid conflicts and risks. Equipment The consideration of equipment includes plans for the procurement and construction of terminal facilities to implement the function of the terminal. Equipment involved in the detailed design includes container cranes, which can be identified by mode: Rail Mounted Quay Crane (RMQC) or Ship to Shore (STS) crane; and intermodal container transport facilities used for storage areas, such as reach stackers, tractor-trailer units (TTUs), and vehicles. Parameters for cranes and intermodal cargo transport facilities considered in detailed design are quantities, size limit, power requirement, handling capacity, handling speed, cost, load-to-land limit, and other working environment constraints. The deployment of equipment shall be designed with the key mission of providing enough cargo-handling capacity to balance the cargo flow. 
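Equipment and berth counts are often checked against ship arrivals with a simple queueing model. The sketch below is an illustrative M/M/c (Erlang C) calculation; the arrival rate, service rate, and number of berths are assumed example values, not figures from this article, and real terminal planning typically uses richer simulation.

```python
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean wait in queue for an M/M/c system (result in the same time unit as the rates)."""
    a = arrival_rate / service_rate                 # offered load in Erlangs
    rho = a / servers                               # utilisation, must be < 1
    p0 = 1.0 / (sum(a ** k / factorial(k) for k in range(servers))
                + a ** servers / (factorial(servers) * (1 - rho)))
    prob_wait = a ** servers / (factorial(servers) * (1 - rho)) * p0   # Erlang C probability
    return prob_wait / (servers * service_rate - arrival_rate)

# Assumed example: 4 ships/day arriving, each berth turns a ship around in 1 day, 6 berths.
print(round(erlang_c_wait(arrival_rate=4.0, service_rate=1.0, servers=6), 2))  # ~0.14 day
```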
Queueing theory can be used to determine the quantity and capacity of the equipment required. Security clearance Some staff, such as coast guard or customs officers, need personal security clearances to work at ports because they must have access to classified information. There are three levels of security clearance: confidential, secret, and top secret. A central element of the clearance process is a background investigation, which includes a face-to-face interview with investigating officers. In some cases staff only need an interim security clearance. A periodic reinvestigation is also carried out every 5 years to renew the clearance; it again covers key aspects of the individual's life but starts from the previous background investigation. Labor In ports, operating systems and personnel development are based on skills acquired through experience, which is most easily gained in advanced industrial environments. Several agreements should be taken into account: ILO code of practice --- Safety and Health in Ports (2003); ILO code of practice --- Security in Ports (2004). Safety and security Safety is the condition of a "steady state" of an organization or place. Security is the process or means, physical or human, of delaying, preventing, and otherwise protecting against external or internal defects, dangers, loss, criminals, and other individuals or actions. The main documents related to terminal operations are ILO convention 152 (1979) and the ILO code of practice (2003), which deal with the health and safety of port labor. Logistics Logistics is the general management of how resources are acquired, stored and transported to their final destination. It involves identifying prospective distributors and suppliers and determining their effectiveness and accessibility. Customs Customs and border protection agencies exist to safeguard a country's borders, thereby protecting the public from dangerous people and materials while enhancing the nation's global economic competitiveness by enabling legitimate trade and travel. Customs at ports of entry have two main tasks: cargo security and protecting agriculture. Every port should host its own nation's customs service. Customs should have base offices at both the administration building and the warehouse (for agricultural checks) and outside offices at both the entry and exit gates. Customs teams at the port should consist of Marine Interdiction Agents, Border Patrol Agents, Agriculture Specialists, Customs and Border Protection Officers, and Import Specialists. See also References Transportation engineering Urban design Maritime education
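To make the queueing-theory sizing mentioned above concrete, here is a minimal sketch using the standard M/M/c (Erlang C) model to estimate the probability that an arriving vessel has to wait for a berth. The arrival rate, service rate and number of berths are hypothetical example values, not data from this article.

# Minimal M/M/c queueing sketch for sizing berths or cranes.
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arrival has to wait (Erlang C formula)."""
    a = arrival_rate / service_rate          # offered load
    rho = a / servers                        # utilisation, must be < 1
    waiting_term = a ** servers / (factorial(servers) * (1.0 - rho))
    no_wait_terms = sum(a ** k / factorial(k) for k in range(servers))
    return waiting_term / (no_wait_terms + waiting_term)

# Example: 4 vessel arrivals per day, each berth serves 1.5 vessels per day, 4 berths
print(round(erlang_c(4.0, 1.5, 4), 3))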
Container port design process
[ "Engineering" ]
4,275
[ "Civil engineering", "Transportation engineering", "Industrial engineering" ]
52,344,551
https://en.wikipedia.org/wiki/New%20York%20Genome%20Center
The New York Genome Center (NYGC) is an independent 501(c)(3) nonprofit academic research institution in New York, New York. It serves as a multi-institutional collaborative hub focused on the advancement of genomic science and its application to drive novel biomedical discoveries. NYGC's areas of focus include the development of computational and experimental genomic methods and disease-focused research to better understand the genetic basis of cancer, neurodegenerative disease, and neuropsychiatric disease. In 2020, the NYGC also directed its expertise to COVID-19 genomics research. Purpose and organization The Center leverages strengths in whole genome sequencing, genomic analysis, and development of genomic tools to advance genomic discovery. Its faculty hold joint tenure-track appointments at its member institutions and lead independent research labs at the center. NYGC's scientists bring a multidisciplinary and in-depth approach to the field of genomics, conducting research in single cell genomics, gene engineering, population and evolutionary genomics, technology and methods development, statistics, computational biology and bioengineering. In 2017, co-founder Tom Maniatis was named Evnin Family Scientific Director and chief executive officer of the New York Genome Center. Founding The center was founded in November 2011 as a collaboration among eleven academic institutions to advance genome research, based on the vision of Dietrich A. Stephan, the leadership of Tom Maniatis, and financial support of $2.5 million from each institution as well as from private philanthropists. In November 2012, the center recruited Robert B. Darnell as president and Scientific Director; he served as CEO and Founding Director before returning to Rockefeller University and his role as a Howard Hughes Medical Institute Investigator in 2017. NYGC formally opened in a multi-story building at 101 Avenue of the Americas on September 19–20, 2013. The 12 founding institutions (Albert Einstein College of Medicine joined the original 11 institutions in April 2013) were: Cold Spring Harbor Laboratory (New York) Columbia University (New York) Weill Cornell Medicine (New York) Memorial Sloan Kettering Cancer Center (New York) Icahn School of Medicine at Mount Sinai (New York) New York—Presbyterian Hospital (New York) New York University (New York) Northwell Health (New York) The Jackson Laboratory (Maine) Rockefeller University (New York) Stony Brook University (New York) Albert Einstein College of Medicine (New York) Currently, the NYGC has 20 member institutions, with Hackensack Meridian Health and Georgetown Lombardi Comprehensive Cancer Center joining as associate members in December 2019 and Rutgers Cancer Institute of New Jersey joining as an associate member in 2020. Funding Since its inception, the center has raised over $500 million to support its genomic research, including federal and private grants and philanthropy. This includes two joint gifts from the Simons Foundation and the Carson Family Charitable Trust; $100 million in 2016 and $125 million in 2019. The New York Genome Center also receives support from its member institutions, as well as New York State, the Empire State Development Corporation, the Partnership Fund for New York City, and the New York City Economic Development Corporation. Government funding has included a $55 million grant from New York State to support genomic medicine.
In 2016 it received a $40 million grant from the National Human Genome Research Institute to establish a Center for Common Disease Genomics, and is leading a collaborative, large-scale genomic sequencing program focused on advancing understanding of common diseases, including autism. Additionally, the Center and Weill Cornell Medicine received a National Cancer Institute grant to support a joint cancer genomics data center for the research and clinical interpretation of tumors, a part of the ongoing development of The Cancer Genome Atlas. The center was also awarded a $13.5 million contract in 2015 to conduct whole genome sequencing and analysis for the National Heart, Lung, and Blood Institute's TOPMed program. In 2017, New York State committed $17 million in capital improvements for the New York Genome Center to house JLABS@NYC, a life sciences incubator, which opened in summer 2018. Notable faculty Harold E. Varmus, MD | Senior Associate Core Member Michael Wigler, PhD | Senior Associate Core Member Simon Tavaré, PhD | Senior Associate Core Member Recent Publications In the last five years, NYGC scientists have published over 200 papers in leading scientific journals. For an up-to-date listing of publications, go to https://www.nygenome.org/lab-groups-overview/publications/ Notes References Bioinformatics organizations DNA sequencing Genome projects 2011 establishments in New York City Medical and health organizations based in New York City
New York Genome Center
[ "Chemistry", "Biology" ]
971
[ "Bioinformatics organizations", "Bioinformatics", "Molecular biology techniques", "DNA sequencing", "Genome projects" ]
52,346,275
https://en.wikipedia.org/wiki/C15H15NO2
{{DISPLAYTITLE:C15H15NO2}} The molecular formula C15H15NO2 (molar mass: 241.285 g/mol, exact mass: 241.1103 u) may refer to: Diphenylalanine Mefenamic acid Nafoxadol Molecular formulas
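As a quick arithmetic cross-check of the molar mass quoted above, one can sum standard (rounded) atomic weights over the formula; because of rounding, the last digit may differ slightly from the quoted 241.285 g/mol.

# Cross-check of the molar mass of C15H15NO2 using rounded standard atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 15, "H": 15, "N": 1, "O": 2}
print(round(sum(atomic_weight[el] * n for el, n in formula.items()), 2))  # ~241.29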
C15H15NO2
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
52,347,214
https://en.wikipedia.org/wiki/Explosive%20antimony
Explosive antimony is an allotrope of the chemical element antimony that is so sensitive to shock that it explodes when scratched or subjected to sudden heating. The allotrope was first described in 1855. Chemists form the allotrope through electrolysis of a concentrated solution of antimony trichloride in hydrochloric acid, which forms an amorphous glass. This glass contains significant amounts of halogen impurity at its boundaries. When it explodes, the allotrope releases 24 calories (100 J) per gram. White fumes of antimony trichloride are produced and the elemental antimony reverts to its metallic form. References Antimony Antimony Explosive chemicals
Explosive antimony
[ "Physics", "Chemistry" ]
148
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Materials", "Explosive chemicals", "Matter" ]
34,004,553
https://en.wikipedia.org/wiki/Photoelectrowetting
Photoelectrowetting is a modification of the wetting properties of a surface (typically a hydrophobic surface) using incident light. Working principle Whereas ordinary electrowetting is observed in surfaces consisting of a liquid/insulator/conductor stack, photoelectrowetting can be observed by replacing the conductor with a semiconductor to form a liquid/insulator/semiconductor stack. This has electrical and optical properties similar to the metal/insulator/semiconductor stack used in metal–oxide–semiconductor field effect transistors (MOSFETs) and charge-coupled devices (CCDs). Replacing the conductor with a semiconductor results in asymmetrical electrowetting behavior (in terms of voltage polarity), depending on the semiconductor doping type and density. Incident light above the semiconductor's band gap creates photo-induced carriers via electron-hole pair generation in the depletion region of the underlying semiconductor. This leads to a modification of the capacitance of the insulator/semiconductor stack, resulting in a modification of the contact angle of a liquid droplet resting on the surface of the stack in a continuous way which can also be non-reversible. The photoelectrowetting effect can be interpreted as a modification of the Young-Lippmann equation. The figure illustrates the principle of the photoelectrowetting effect. At zero bias (0V) the conducting droplet has a large contact angle (left image) if the insulator is hydrophobic. As the bias is increased (positive for a p-type semiconductor, negative for an n-type semiconductor) the droplet spreads out – i.e. the contact angle decreases (middle image). In the presence of light (having an energy greater than the band gap of the semiconductor) the droplet spreads out more due to the reduction of the thickness of the space charge region at the insulator/semiconductor interface (right image). Optical actuation of MEMS Photoactuation of microelectromechanical systems (MEMS) has been demonstrated using photoelectrowetting. A microcantilever is placed on top of the liquid-insulator-photoconductor junction. As light is shined on the junction, the capillary force from the droplet on the cantilever, due to the contact angle change, deflects the cantilever. This wireless actuation can be used as a substitute for complex circuit-based systems currently used for optical addressing and control of autonomous wireless sensors. Droplet transport Photoelectrowetting can be used to circulate aqueous solution-based sessile droplets on a silicon wafer covered with silicon dioxide and Teflon – the latter providing a hydrophobic surface. Droplet transport is achieved by focusing a laser at the leading edge of the droplet. Droplet speeds of more than 10 mm/s can be achieved without the necessity of underlying patterned electrodes. See also Optoelectrowetting Microoptoelectromechanical systems References External links Institut d’Electronique, de Microélectronique et de Nanotechnologie (IEMN) - Centre National de la Recherche Scientifique (CNRS) - University of Lille The Deegan Group - University of Michigan Fluid mechanics Microfluidics
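To illustrate the Young-Lippmann interpretation mentioned above, the sketch below treats the insulator and the semiconductor depletion layer as two capacitors in series and shows how narrowing the depletion region (as illumination does) lowers the contact angle. All material values, layer thicknesses and the bias voltage are assumed example numbers, not measurements from the cited work.

# Young-Lippmann sketch with a series insulator + depletion-layer capacitance.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle_deg(theta0_deg, voltage, c_per_area, gamma=0.072):
    """Contact angle from cos(theta) = cos(theta0) + C*V^2 / (2*gamma)."""
    cos_t = math.cos(math.radians(theta0_deg)) + c_per_area * voltage ** 2 / (2.0 * gamma)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def series_capacitance(d_ins, eps_ins, w_dep, eps_semi):
    """Capacitance per unit area of insulator and depletion layer in series."""
    c_ins = eps_ins * EPS0 / d_ins
    c_dep = eps_semi * EPS0 / w_dep
    return 1.0 / (1.0 / c_ins + 1.0 / c_dep)

dark = series_capacitance(1e-6, 3.0, 1.0e-6, 11.7)   # wide depletion region
lit = series_capacitance(1e-6, 3.0, 0.1e-6, 11.7)    # narrowed by illumination
print(contact_angle_deg(110.0, 30.0, dark), contact_angle_deg(110.0, 30.0, lit))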
Photoelectrowetting
[ "Materials_science", "Engineering" ]
690
[ "Civil engineering", "Microfluidics", "Fluid mechanics", "Microtechnology" ]
34,010,616
https://en.wikipedia.org/wiki/Transient%20liquid%20phase%20diffusion%20bonding
Transient liquid phase diffusion bonding (TLPDB) is a joining process that has been applied for bonding many metallic and ceramic systems which cannot be bonded by conventional fusion welding techniques. The bonding process produces joints with a uniform composition profile and is tolerant of surface oxides and geometrical defects. The bonding technique has been exploited in a wide range of applications, from the production and repair of turbine engines in the aerospace industry, to nuclear power plants, and in making connections to integrated circuit dies as a part of the microelectronics industry. Process Transient liquid phase diffusion bonding is a process that differs from diffusion bonding. In transient liquid phase diffusion bonding, an element or alloy with a lower melting point in an interlayer diffuses into the lattice and grain boundaries of the substrates at the bonding temperature. Solid state diffusional processes lead to a change of composition at the bond interface and the dissimilar interlayer melts at a lower temperature than the parent materials. Thus, a thin layer of liquid spreads along the interface to form a joint at a lower temperature than the melting point of either of the parent materials. This method differs from brazing in that it solidifies isothermally. While the temperature is held above the filler metal melting point, interdiffusion shifts the composition away from the eutectic, so solidification occurs at the process temperature. If sufficient interdiffusion occurs, the joint will remain solid and strong well above the original melt process temperature. This is why it is termed "transient liquid phase": the liquid solidifies before cooling. Interlayer In this technique it is necessary to select a suitable interlayer by considering its wettability, flow characteristics, high stability to prevent reactions with the base materials, and the ability to form a composition having a remelt temperature higher than the bonding temperature. The joining technique dates back to ancient times. For example, copper oxide painted on as an interlayer and covered with tallow or glue to hold gold balls onto a gold article was heated in a reducing flame to form a eutectic alloy at the bond area. Kinetics There are many theories on the kinetics of the bonding process but the most common theory divides the process into four main stages. The stages are: dissolution of the interlayer homogenization of the liquid isothermal solidification homogenization of the bond region References Metallurgy Welding
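As a rough, order-of-magnitude sketch of why the isothermal-solidification stage tends to dominate the bonding time, one can use the generic diffusion-length relation x ≈ sqrt(D·t). Real TLP kinetics models include composition-dependent constants; the interlayer width and diffusivity below are assumed example values only.

# Order-of-magnitude time estimate from the diffusion-length relation x ~ sqrt(D*t).
def diffusion_time_s(length_m, diffusivity_m2_per_s):
    """Approximate time for a diffusion front to advance a given distance."""
    return length_m ** 2 / diffusivity_m2_per_s

# Example: 25 micrometre interlayer half-width, solid-state diffusivity ~1e-13 m^2/s
t = diffusion_time_s(25e-6, 1e-13)
print(round(t), "s  (about", round(t / 3600.0, 1), "hours)")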
Transient liquid phase diffusion bonding
[ "Chemistry", "Materials_science", "Engineering" ]
483
[ "Welding", "Metallurgy", "Materials science", "Mechanical engineering", "nan" ]
55,258,733
https://en.wikipedia.org/wiki/Moving%20heat%20source%20model%20for%20thin%20plates
In heat transfer, moving heat sources is an engineering problem, particularly in welding. In the early 20th century, welding engineers began studying moving heat sources in thin plates, both empirically and theoretically. Depending on welding parameters, plate geometry and material properties, the solution takes three different forms: semi-infinite, intermediate, or thin plate. The temperature distribution and cooling rates can be determined from theoretical solutions to the problem, allowing engineers to better understand the consequences of heat sources on weldability and end item performance. Historical solutions Empirical In the 1930s metallurgists Albert Portevin and D. Seferian attempted to experimentally determine heat transfer characteristics in welding. They correlated the effects of several factors—material properties, welding process, and part dimensions—on temperature distribution, by performing oxyacetylene (gas) and covered electrode (arc) welds on plates and bars of various profiles, and multiple materials, including steel, copper, and aluminum. Their work showed that arc welding temperature gradients were steeper and cooling rates were faster than those of gas welding, which were more sensitive to material thickness than those of arc welding. In addition to process, material properties, and dimensions, the authors noted that preheat played a role in temperature distribution. G.E. Claussen and W. Sparagen did not detail other attempts to determine temperature distribution in welding, because the variety of approaches employed by the investigators resulted in data that were not comparable. They did note that the data generally revealed the effect of weld process on heat affected zone (HAZ) width, with gas welding having the widest HAZ, bare electrode arc processes the narrowest, and covered electrode falling in the middle. Theoretical Until the mid-1930s the study of the theory of heat transfer from a moving source was neglected, and temperature distribution due to moving heat sources could only be calculated approximately. In 1935, Daniel Rosenthal published the first literature applying the exact theory of heat flow from a moving source to arc welding. Rosenthal's theoretical model included several assumptions: Material properties are constant The heat source is a point source The surface of the work piece does not lose heat to the atmosphere Heat created by the Joule effect is neglected Rosenthal's solution has been shown to agree well with measured results over a wide range of parameters, although with some scattering of data. The assumption of a point, line, or plane heat source leads to inaccuracy in the vicinity of the fusion zone (where temperature is within about 20% of the melting temperature) and prohibits predicting the shape of the weld pool. Following Rosenthal, researchers were able to approximate weld pool shape by assuming a Gaussian heat source defined by the equation: where: Q : heat source, q : net power input, σ : distribution parameter. and later, other heat source distributions, such as semi-ellipsoidal and double ellipsoidal. Equations The governing equation for 3D transient heat transfer in a solid of semi-infinite dimensions, with no heat generation or surface losses, is: where: θ : temperature, x : direction parallel to weld travel, y : direction in plane and perpendicular to weld travel, z : through-thickness direction, λ : thermal conductivity, ρ : density, t : time, C : specific heat. 
In the case of a moving heat source applied to a plate that is so thin that temperature does not vary in the through-thickness dimension, the third term becomes zero, and the problem is two-dimensional conduction. The factors that determine whether temperature varies through the thickness include: welding speed (increases thermal gradient in through thickness direction), thermal diffusivity (decreases thermal gradient in through thickness direction), thickness (increases thermal gradient in through thickness direction). The problem is further simplified by taking advantage of the quasi-stationary state in welding, where temperature distribution from the perspective of a coordinate system that moves with the heat source is constant in time. The through thickness direction and direction perpendicular to the direction of travel are unchanged in the moving coordinate system, but the direction parallel to travel is related to the longitudinal direction of the fixed coordinate system by , where: w : moving coordinate system longitudinal direction, V : weld feed speed, x : fixed coordinate system longitudinal direction. A heat flux boundary condition attributed to Rosenthal is to consider the rate of energy (power) transferred from the arc to the plate as equal to the heat transferred outward from a cylinder with height equal to the plate thickness and an infinitely small radius at the origin: where: P : power, r : distance from point source, h : plate thickness. Another boundary condition is that temperature remains constant at distances far from the point source. Because the boundary conditions and two-dimensional differential equation can be satisfied by a solution that is dependent on distance from the source, a cylindrical coordinate system is used, with: The resulting cylindrical differential equation is: where φ is a function that will be determined later. Solution The solution of the radial "quasi-stationary" equation is the modified Bessel function of the second kind and zeroth order: Substituting φ into the equation Rosenthal assumed for the solution of the original differential equation: Finite element analysis Finite element analysis (FEA) eliminates the assumption of non-constant material properties, and allows the use of non-axisymmetric, three-dimensional heat sources such as ellipsoidal and double ellipsoidal distributions. The double ellipsoidal heat source distribution presented by John Goldak is intended to be flexible, to be used to analyze deep or shallow welds, and asymmetric geometry. The Goldak model has been shown to agree well with experimental results on thick section submerged arc weld (SAW) on steel plate, partial penetration electron beam weld (EBW) on steel plate, and gas tungsten arc weld (GTAW) on thin austenitic stainless steel plate. Applications Solution of temperature distribution and cooling rate due to a moving heat source has several practical uses in welding engineering, including: microstructure, joint strength, residual stress, cold cracking, size of HAZ, distortion. which are dependent on cooling time through temperature ranges (800C – 500C and 400C – 150C) for steels, as well as time spent at elevated temperatures. Rosenthal's solution can be manipulated to determine critical cooling rates, and select optimal preheat and interpass temperatures. Goldak's method has been shown to more accurately calculate 800C – 500C cooling rate than Rosenthal's for Goldak's SAW and EBW experiments. 
Goldak's method has been shown to be comparable to the Gaussian and semi-ellipsoidal FEM models of Hashemzadeh's GTAW experiment. References Heat transfer
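For readers who want to experiment with the thin-plate case discussed above, the following is a hedged numerical sketch of Rosenthal's quasi-stationary thin-plate solution, T(w, y) − T0 = q/(2·pi·k·h) · exp(−V·w/(2a)) · K0(V·r/(2a)), with r = sqrt(w² + y²) and thermal diffusivity a = k/(rho·C). The material properties and process parameters are assumed illustrative values, not figures from the studies cited in the article.

# Rosenthal thin-plate temperature field around a moving point source.
import numpy as np
from scipy.special import k0  # modified Bessel function of the second kind, order 0

def rosenthal_thin_plate(w, y, q=3000.0, V=0.005, h=0.003,
                         k=30.0, rho=7800.0, C=500.0, T0=25.0):
    """Temperature (deg C) at (w, y) in the coordinate frame moving with the source."""
    a = k / (rho * C)                 # thermal diffusivity, m^2/s
    r = np.sqrt(w ** 2 + y ** 2)
    return T0 + q / (2.0 * np.pi * k * h) * np.exp(-V * w / (2.0 * a)) * k0(V * r / (2.0 * a))

# Temperature 5 mm ahead of the source on the weld centreline
print(rosenthal_thin_plate(w=0.005, y=0.0))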
Moving heat source model for thin plates
[ "Physics", "Chemistry" ]
1,379
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
55,262,478
https://en.wikipedia.org/wiki/Landau%E2%80%93Levich%20problem
In fluid dynamics, Landau–Levich flow or the Landau–Levich problem describes the flow created by a moving plate which is pulled out of a liquid surface. Landau–Levich flow finds many applications in thin film coating. The solution to the problem was described by Lev Landau and Veniamin Levich in 1942. The problem assumes that the plate is dragged out of the liquid slowly, so that the three major forces in balance are the viscous force, the force due to gravity, and the force due to surface tension. Problem Landau and Levich split the flow into two regimes, a lower regime and an upper regime. In the lower regime, closer to the liquid surface, the flow is assumed to be static, leading to the problem of the Young–Laplace equation (a static meniscus). In the upper region, far away from the liquid surface, the liquid layer attached to the plate is very thin and the velocity of the plate is small, so this regime falls under the approximation of lubrication theory. The solutions of these two problems are then matched using the method of matched asymptotic expansions. References Flow regimes Fluid dynamics Thin film deposition Lev Landau
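The matched solution leads to the classical Landau–Levich film-thickness law, h = 0.945·l_c·Ca^(2/3), where Ca = mu·U/gamma is the capillary number and l_c = sqrt(gamma/(rho·g)) is the capillary length, valid when Ca is much smaller than 1. The short sketch below evaluates it for assumed, roughly water-like fluid properties.

# Landau-Levich film thickness h = 0.945 * l_c * Ca**(2/3) for small Ca.
import math

def film_thickness_m(U, mu=1.0e-3, gamma=0.072, rho=1000.0, g=9.81):
    Ca = mu * U / gamma                    # capillary number (must be << 1)
    l_c = math.sqrt(gamma / (rho * g))     # capillary length
    return 0.945 * l_c * Ca ** (2.0 / 3.0)

# Plate withdrawn at 1 mm/s from a water-like liquid
print(film_thickness_m(1e-3) * 1e6, "micrometres")  # roughly 1.5 um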
Landau–Levich problem
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
248
[ "Thin film deposition", "Chemical engineering", "Coatings", "Thin films", "Flow regimes", "Piping", "Planes (geometry)", "Solid state engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
38,026,710
https://en.wikipedia.org/wiki/Blaschke%E2%80%93Lebesgue%20theorem
In plane geometry the Blaschke–Lebesgue theorem states that the Reuleaux triangle has the least area of all curves of given constant width. In the form that every curve of a given width encloses an area at least as large as the Reuleaux triangle of the same width, it is also known as the Blaschke–Lebesgue inequality. It is named after Wilhelm Blaschke and Henri Lebesgue, who published it separately in the early 20th century. Statement The width of a convex set in the Euclidean plane is defined as the minimum distance between any two parallel lines that enclose it. The two minimum-distance lines are both necessarily tangent lines to the set, on opposite sides. A curve of constant width is the boundary of a convex set with the property that, for every direction of parallel lines, the two tangent lines with that direction that are tangent to opposite sides of the curve are at a distance equal to the width. These curves include both the circle and the Reuleaux triangle, a curved triangle formed from arcs of three equal-radius circles each centered at a crossing point of the other two circles. The area enclosed by a Reuleaux triangle of width w is (π − √3)w²/2. The Blaschke–Lebesgue theorem states that this is the unique minimum possible area of a curve of constant width, and the Blaschke–Lebesgue inequality states that every convex set of width w has area at least this large, with equality only when the set is bounded by a Reuleaux triangle. History The Blaschke–Lebesgue theorem was published independently in 1914 by Henri Lebesgue and in 1915 by Wilhelm Blaschke. Since their work, several other proofs have been published. In other planes The same theorem is also true in the hyperbolic plane. For any convex distance function on the plane (a distance defined as the norm of the vector difference of points, for any norm), an analogous theorem holds true, according to which the minimum-area curve of constant width is an intersection of three metric disks, each centered on a boundary point of the other two. Application The Blaschke–Lebesgue theorem has been used to provide an efficient strategy for generalizations of the game of Battleship, in which one player has a ship formed by intersecting the integer grid with a convex set and the other player, after having found one point on this ship, is aiming to determine its location using the fewest possible missed shots. For a ship consisting of a given number of grid points, it is possible to bound the number of missed shots in terms of this number. Related problems By the isoperimetric inequality, the curve of constant width in the Euclidean plane with the largest area is a circle. The perimeter of a curve of constant width w is πw, regardless of its shape; this is Barbier's theorem. It is unknown which surfaces of constant width in three-dimensional space have the minimum volume. Bonnesen and Fenchel conjectured in 1934 that the minimizers are the two Meissner bodies obtained by rounding some of the edges of a Reuleaux tetrahedron, but this remains unproven. In 2011 Anciaux and Guilfoyle proved that the minimizer consists of pieces of spheres and tubes over curves, which is true for the Meissner bodies, thus supporting the conjecture of Bonnesen and Fenchel. References Theorems in plane geometry Geometric inequalities Area Constant width
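As a quick numerical check of the quantities above for width w = 1: the Reuleaux triangle area (π − √3)/2 is indeed smaller than the disc area π/4, and both curves share the Barbier perimeter πw.

# Numerical check of the Reuleaux triangle area, the disc area, and Barbier's perimeter.
import math

w = 1.0
reuleaux_area = 0.5 * (math.pi - math.sqrt(3.0)) * w ** 2
disc_area = math.pi * w ** 2 / 4.0
perimeter = math.pi * w
print(round(reuleaux_area, 4), round(disc_area, 4), round(perimeter, 4))
# 0.7048 0.7854 3.1416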
Blaschke–Lebesgue theorem
[ "Physics", "Mathematics" ]
693
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Theorems in plane geometry", "Geometric inequalities", "Inequalities (mathematics)", "Theorems in geometry", "Wikipedia categories named after physical quantities", "Area" ]
38,029,797
https://en.wikipedia.org/wiki/HIV/AIDS%3A%20Research%20and%20Palliative%20Care
HIV/AIDS: Research and Palliative Care is a peer-reviewed medical journal covering HIV and its treatment. The journal was established in 2009 and is published by Dove Medical Press. It is abstracted and indexed in PubMed, EMBASE, EmCare, and Scopus. External links English-language journals Open access journals Dove Medical Press academic journals HIV/AIDS journals Academic journals established in 2009
HIV/AIDS: Research and Palliative Care
[ "Biology" ]
82
[ "Virus stubs", "Viruses" ]
38,034,479
https://en.wikipedia.org/wiki/Enzyme%20promiscuity
Enzyme promiscuity is the ability of an enzyme to catalyze an unexpected side reaction in addition to its main reaction. Although enzymes are remarkably specific catalysts, they can often perform side reactions in addition to their main, native catalytic activity. These wild activities are usually slow relative to the main activity and are under neutral selection. Despite ordinarily being physiologically irrelevant, under new selective pressures, these activities may confer a fitness benefit therefore prompting the evolution of the formerly promiscuous activity to become the new main activity. An example of this is the atrazine chlorohydrolase (atzA encoded) from Pseudomonas sp. ADP evolved from melamine deaminase (triA encoded), which has very small promiscuous activity toward atrazine, a man-made chemical. Introduction Enzymes are evolved to catalyze a particular reaction on a particular substrate with high catalytic efficiency (kcat/KM, cf. Michaelis–Menten kinetics). However, in addition to this main activity, they possess other activities that are generally several orders of magnitude lower, and that are not a result of evolutionary selection and therefore do not partake in the physiology of the organism. This phenomenon allows new functions to be gained as the promiscuous activity could confer a fitness benefit under a new selective pressure leading to its duplication and selection as a new main activity. Enzyme evolution Duplication and divergence Several theoretical models exist to predict the order of duplication and specialisation events, but the actual process is more intertwined and fuzzy (§ Reconstructed enzymes below). On one hand, gene amplification results in an increase in enzyme concentration, and potentially freedom from a restrictive regulation, therefore increasing the reaction rate (v) of the promiscuous activity of the enzyme making its effects more pronounced physiologically ("gene dosage effect"). On the other, enzymes may evolve an increased secondary activity with little loss to the primary activity ("robustness") with little adaptive conflict (§ Robustness and plasticity below). Robustness and plasticity A study of four distinct hydrolases (human serum paraoxonase (PON1), pseudomonads phosphotriesterase (PTE), Protein tyrosine phosphatase(PTP) and human carbonic anhydrase II (CAII)) has shown the main activity is "robust" towards change, whereas the promiscuous activities are weak and more "plastic". Specifically, selecting for an activity that is not the main activity (via directed evolution), does not initially diminish the main activity (hence its robustness), but greatly affects the non-selected activities (hence their plasticity). The phosphotriesterase (PTE) from Pseudomonas diminuta was evolved to become an arylesterase (P–O to C–O hydrolase) in eighteen rounds gaining a 109 shift in specificity (ratio of KM), however most of the change occurred in the initial rounds, where the unselected vestigial PTE activity was retained and the evolved arylesterase activity grew, while in the latter rounds there was a little trade-off for the loss of the vestigial PTE activity in favour of the arylesterase activity. This means firstly that a specialist enzyme (monofunctional) when evolved goes through a generalist stage (multifunctional), before becoming a specialist again—presumably after gene duplication according to the IAD model—and secondly that promiscuous activities are more plastic than the main activity. 
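To put the phrase "several orders of magnitude lower" in concrete terms, the sketch below compares a native and a promiscuous activity of the same hypothetical enzyme using the Michaelis–Menten rate law v = kcat·[E]·[S]/(KM + [S]). The kinetic constants are invented for illustration; only the contrast in catalytic efficiency (kcat/KM) matters.

# Michaelis-Menten comparison of a native versus a weak promiscuous activity.
def mm_rate(kcat, km, e_total, s):
    """Initial rate v = kcat * [E]total * [S] / (KM + [S]), in M/s."""
    return kcat * e_total * s / (km + s)

native = {"kcat": 100.0, "km": 1e-4}        # kcat/KM = 1e6  M^-1 s^-1
promiscuous = {"kcat": 0.01, "km": 1e-3}    # kcat/KM = 10   M^-1 s^-1

e_total, s = 1e-6, 1e-4   # 1 uM enzyme, 100 uM substrate
print(mm_rate(native["kcat"], native["km"], e_total, s),
      mm_rate(promiscuous["kcat"], promiscuous["km"], e_total, s))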
Reconstructed enzymes The most recent and most clear cut example of enzyme evolution is the rise of bioremediating enzymes in the past 60 years. Due to the very low number of amino acid changes, these provide an excellent model to investigate enzyme evolution in nature. However, using extant enzymes to determine how the family of enzymes evolved has the drawback that the newly evolved enzyme is compared to paralogues without knowing the true identity of the ancestor before the two genes diverged. This issue can be resolved thanks to ancestral reconstruction. First proposed in 1963 by Linus Pauling and Emile Zuckerkandl, ancestral reconstruction is the inference and synthesis of a gene from the ancestral form of a group of genes, which has had a recent revival thanks to improved inference techniques and low-cost artificial gene synthesis, resulting in several ancestral enzymes—dubbed "stemzymes" by some—to be studied. Evidence gained from reconstructed enzyme suggests that the order of the events where the novel activity is improved and the gene is duplication is not clear cut, unlike what the theoretical models of gene evolution suggest. One study showed that the ancestral gene of the immune defence protease family in mammals had a broader specificity and a higher catalytic efficiency than the contemporary family of paralogues, whereas another study showed that the ancestral steroid receptor of vertebrates was an oestrogen receptor with slight substrate ambiguity for other hormones—indicating that these probably were not synthesised at the time. This variability in ancestral specificity has not only been observed between different genes, but also within the same gene family. In light of the large number of paralogous fungal α-glucosidase genes with a number of specific maltose-like (maltose, turanose, maltotriose, maltulose and sucrose) and isomaltose-like (isomaltose and palatinose) substrates, a study reconstructed all key ancestors and found that the last common ancestor of the paralogues was mainly active on maltose-like substrates with only trace activity for isomaltose-like sugars, despite leading to a lineage of iso-maltose glucosidases and a lineage that further split into maltose glucosidases and iso-maltose glucosidases. Antithetically, the ancestor before the latter split had a more pronounced isomaltose-like glucosidase activity. Primordial metabolism Roy Jensen in 1976 theorised that primordial enzymes had to be highly promiscuous in order for metabolic networks to assemble in a patchwork fashion (hence its name, the patchwork model). This primordial catalytic versatility was later lost in favour of highly catalytic specialised orthologous enzymes. As a consequence, many central-metabolic enzymes have structural homologues that diverged before the last universal common ancestor. Distribution Promiscuity is not only a first trait, but also a very widespread property in modern genomes. A series of experiments have been conducted to assess the distribution of promiscuous enzyme activities in E. coli. In E. coli 21 out of 104 single-gene knockouts tested (from the Keio collection) could be rescued by overexpressing a noncognate E. coli protein (using a pooled set of plasmids of the ASKA collection). 
The mechanisms by which the noncognate ORF could rescue the knockout can be grouped into eight categories: isozyme overexpression (homologues), substrate ambiguity, transport ambiguity (scavenging), catalytic promiscuity, metabolic flux maintenance (including overexpression of the large component of a synthase in the absence of the amine transferase subunit), pathway bypass, regulatory effects and unknown mechanisms. Similarly, overexpressing the ORF collection allowed E. coli to gain over an order of magnitude in resistance in 86 out 237 toxic environment. Homology Homologues are sometimes known to display promiscuity towards each other's main reactions. This crosswise promiscuity has been most studied with members of the alkaline phosphatase superfamily, which catalyse hydrolytic reaction on the sulfate, phosphonate, monophosphate, diphosphate or triphosphate ester bond of several compounds. Despite the separation the homologues have a varying degree of reciprocal promiscuity: the differences in promiscuity are due to mechanisms involved, particularly the intermediate required. Degree of promiscuity Enzymes are generally in a state that is not only a compromise between stability and catalytic efficiency, but also for specificity and evolvability, the latter two dictating whether an enzyme is a generalist (highly evolvable due to large promiscuity, but low main activity) or a specialist (high main activity, poorly evolvable due to low promiscuity). Examples of these are enzymes for primary and secondary metabolism in plants (§ Plant secondary metabolism below). Other factors can come into play, for example the glycerophosphodiesterase (gpdQ) from Enterobacter aerogenes shows different values for its promiscuous activities depending on the two metal ions it binds, which is dictated by ion availability. In some cases promiscuity can be increased by relaxing the specificity of the active site by enlarging it with a single mutation as was the case of a D297G mutant of the E. coli L-Ala-D/L-Glu epimerase (ycjG) and E323G mutant of a pseudomonad muconate lactonizing enzyme II, allowing them to promiscuously catalyse the activity of O-succinylbenzoate synthase (menC). Conversely, promiscuity can be decreased as was the case of γ-humulene synthase (a sesquiterpene synthase) from Abies grandis that is known to produce 52 different sesquiterpenes from farnesyl diphosphate upon several mutations. Studies on enzymes with broad-specificity—not promiscuous, but conceptually close—such as mammalian trypsin and chymotrypsin, and the bifunctional isopropylmalate isomerase/homoaconitase from Pyrococcus horikoshii have revealed that active site loop mobility contributes substantially to the catalytic elasticity of the enzyme. Toxicity A promiscuous activity is a non-native activity the enzyme did not evolve to do, but arises due to an accommodating conformation of the active site. However, the main activity of the enzyme is a result not only of selection towards a high catalytic rate towards a particular substrate to produce a particular product, but also to avoid the production of toxic or unnecessary products. For example, if a tRNA synthesis loaded an incorrect amino acid onto a tRNA, the resulting peptide would have unexpectedly altered properties, consequently to enhance fidelity several additional domains are present. 
Similar in reaction to tRNA synthesis, the first subunit of tyrocidine synthetase (tyrA) from Bacillus brevis adenylates a molecule of phenylalanine in order to use the adenyl moiety as a handle to produce tyrocidine, a cyclic non-ribosomal peptide. When the specificity of enzyme was probed, it was found that it was highly selective against natural amino acids that were not phenylalanine, but was much more tolerant towards unnatural amino acids. Specifically, most amino acids were not catalysed, whereas the next most catalysed native amino acid was the structurally similar tyrosine, but at a thousandth as much as phenylalanine, whereas several unnatural amino acids where catalysed better than tyrosine, namely D-phenylalanine, β-cyclohexyl-L-alanine, 4-amino-L-phenylalanine and L-norleucine. One peculiar case of selected secondary activity are polymerases and restriction endonucleases, where incorrect activity is actually a result of a compromise between fidelity and evolvability. For example, for restriction endonucleases incorrect activity (star activity) is often lethal for the organism, but a small amount allows new functions to evolve against new pathogens. Plant secondary metabolism Plants produce a large number of secondary metabolites thanks to enzymes that, unlike those involved in primary metabolism, are less catalytically efficient but have a larger mechanistic elasticity (reaction types) and broader specificities. The liberal drift threshold (caused by the low selective pressure due to the small population size) allows the fitness gain endowed by one of the products to maintain the other activities even though they may be physiologically useless. Biocatalysis In biocatalysis, many reactions are sought that are absent in nature. To do this, enzymes with a small promiscuous activity towards the required reaction are identified and evolved via directed evolution or rational design. An example of a commonly evolved enzyme is ω-transaminase which can replace a ketone with a chiral amine and consequently libraries of different homologues are commercially available for rapid biomining (eg. Codexis). Another example is the possibility of using the promiscuous activities of cysteine synthase (cysM) towards nucleophiles to produce non-proteinogenic amino acids. Reaction similarity Similarity between enzymatic reactions (EC) can be calculated by using bond changes, reaction centres or substructure metrics (EC-BLAST ). Drugs and promiscuity Whereas promiscuity is mainly studied in terms of standard enzyme kinetics, drug binding and subsequent reaction is a promiscuous activity as the enzyme catalyses an inactivating reaction towards a novel substrate it did not evolve to catalyse. This could be because of the demonstration that there are only a small number of distinct ligand binding pockets in proteins. Mammalian xenobiotic metabolism, on the other hand, was evolved to have a broad specificity to oxidise, bind and eliminate foreign lipophilic compounds which may be toxic, such as plant alkaloids, so their ability to detoxify anthropogenic xenobiotics is an extension of this. See also Evolution by gene duplication Michaelis–Menten kinetics Molecular promiscuity Protein moonlighting Susumu Ohno Footnotes References Biomolecules Enzymes Metabolism Catalysis Process chemicals
Enzyme promiscuity
[ "Chemistry", "Biology" ]
2,938
[ "Catalysis", "Natural products", "Organic compounds", "Cellular processes", "Structural biology", "Biomolecules", "Biochemistry", "Chemical kinetics", "Metabolism", "Process chemicals", "Molecular biology" ]
38,036,756
https://en.wikipedia.org/wiki/100K%20Pathogen%20Genome%20Project
The 100K Pathogen Genome Project was launched in July 2012 by Bart Weimer (UC Davis) as an academic, public, and private partnership. It aims to sequence the genomes of 100,000 infectious microorganisms to create a database of bacterial genome sequences for use in public health, outbreak detection, and bacterial pathogen detection. This will speed up the diagnosis of foodborne illnesses and shorten infectious disease outbreaks. The 100K Pathogen Genome Project is a public-private collaborative project to sequence the genomes of 100,000 infectious microorganisms. The 100K Genome Project will provide a roadmap for developing tests to identify pathogens and trace their origins more quickly. Partners announced in the launch of the project were UC Davis, Agilent Technologies, and the US Food and Drug Administration, with the US Centers for Disease Control and Prevention and the US Department of Agriculture noted as collaborators. As the project has proceeded, the partnership has evolved to include or replace these founding partners. The 100K Pathogen Genome Project was selected by the IBM/Mars Food Safety Consortium for metagenomic sequences. The 100K Pathogen Genome Project is conducting high-throughput next-generation sequencing (NGS) to investigate the genomes of targeted microorganisms, with whole genome sequencing to be carried out on a small number of microorganisms for use as a reference genome. Most bacterial strains will be sequenced and assembled as draft genomes; however, the project has also produced closed genomes for a variety of enteric pathogens in the 100K bioproject. This strategy enables worldwide collaboration to identify sets of genetic biomarkers associated with important pathogen traits. This five-year microbial pathogen project will result in a free, public database containing the sequence information for each pathogen's genome. The completed gene sequences will be stored in the National Institutes of Health (NIH)'s National Center for Biotechnology Information (NCBI)'s public database. Using the database, scientists will be able to develop new methods of controlling disease-causing bacteria in the food chain. References External links 100K Pathogen Genome Project GOLD:Genomes OnLine Database Genome Project Database SUPERFAMILY The sea urchin genome database NRCPB. Biotechnology Genomics organizations Bioinformatics organizations DNA Genome databases Pathogen genomics Medical genetics Gene tests Molecular genetics
100K Pathogen Genome Project
[ "Chemistry", "Biology" ]
470
[ "Genetics techniques", "Bioinformatics organizations", "Gene tests", "Biotechnology", "Bioinformatics", "Molecular genetics", "DNA sequencing", "nan", "Molecular biology", "Genome projects", "Pathogen genomics" ]
38,037,069
https://en.wikipedia.org/wiki/C18H14O9
{{DISPLAYTITLE:C18H14O9}} The molecular formula C18H14O9 (molar mass: 374.29 g/mol, exact mass: 374.063782 u) may refer to: Connorstictic acid, a depsidone Fucophlorethol A, a phlorotannin Protocetraric acid, a depsidone Trifucol, a phlorotannin Molecular formulas
C18H14O9
[ "Physics", "Chemistry" ]
102
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
43,667,672
https://en.wikipedia.org/wiki/Ewald%E2%80%93Oseen%20extinction%20theorem
In optics, the Ewald–Oseen extinction theorem, sometimes referred to as just the extinction theorem, is a theorem that underlies the common understanding of scattering (as well as refraction, reflection, and diffraction). It is named after Paul Peter Ewald and Carl Wilhelm Oseen, who proved the theorem in crystalline and isotropic media, respectively, in 1916 and 1915. Originally, the theorem applied to scattering by isotropic dielectric objects in free space. Its scope was later greatly extended to encompass a wide variety of bianisotropic media. Overview An important part of optical physics theory is starting with microscopic physics (the behavior of atoms and electrons) and using it to derive the familiar, macroscopic laws of optics. In particular, there is a derivation of how the refractive index works and where it comes from, starting from microscopic physics. The Ewald–Oseen extinction theorem is one part of that derivation (as is the Lorentz–Lorenz equation etc.). When light traveling in vacuum enters a transparent medium like glass, the light slows down, as described by the index of refraction. Although this fact is famous and familiar, it is actually quite strange and surprising when considered microscopically. After all, according to the superposition principle, the light in the glass is a superposition of: The original light wave, and The light waves emitted by oscillating electrons in the glass. (Light is an oscillating electromagnetic field that pushes electrons back and forth, emitting dipole radiation.) Individually, each of these waves travels at the speed of light in vacuum, not at the (slower) speed of light in glass. Yet when the waves are added up, they surprisingly create only a wave that travels at the slower speed. The Ewald–Oseen extinction theorem says that the light emitted by the atoms has a component traveling at the speed of light in vacuum, which exactly cancels out ("extinguishes") the original light wave. Additionally, the light emitted by the atoms has a component which looks like a wave traveling at the slower speed of light in glass. Altogether, the only wave in the glass is the slow wave, consistent with what we expect from basic optics. A more complete description can be found in Classical Optics and its Applications, by Masud Mansuripur. A proof of the classical theorem can be found in Principles of Optics, by Born and Wolf, and that of its extension has been presented by Akhlesh Lakhtakia. Derivation from Maxwell's equations Introduction When an electromagnetic wave enters a dielectric medium, it excites (resonates) the material's electrons, whether they are free or bound, setting them into a vibratory state with the same frequency as the wave. These electrons will in turn radiate their own electromagnetic fields as a result of their oscillation (EM fields of oscillating charges). Due to the linearity of Maxwell's equations, one expects the total field at any point in space to be the sum of the original field and the field produced by the oscillating electrons. This result is, however, at odds with the practical wave one observes in the dielectric, which moves at a speed of c/n, where n is the medium's index of refraction. The Ewald–Oseen extinction theorem addresses this disconnect by demonstrating how the superposition of these two waves reproduces the familiar result of a wave that moves at a speed of c/n. Derivation The following is a derivation based on a work by Ballenegger and Weber.
Let's consider a simplified situation in which a monochromatic electromagnetic wave is normally incident on a medium filling half the space in the region z>0 as shown in Figure 1. The electric field at a point in space is the sum of the electric fields due to all the various sources. In our case, we separate the fields in two categories based on their generating sources. We denote the incident field and the sum of the fields generated by the oscillating electrons in the medium The total field at any point z in space is then given by the superposition of the two contributions, To match what we already observe, has this form. However, we already know that inside the medium, z>0, we will only observe what we call the transmitted E-field which travels through the material at speed c/n. Therefore in this formalism, This to say that the radiated field cancels out the incident field and creates a transmitted field traveling within the medium at speed c/n. Using the same logic, outside the medium the radiated field produces the effect of a reflected field traveling at speed c in the opposite direction to the incident field. assume that the wavelength is much larger than the average separation of atoms so that the medium can be considered continuous. We use the usual macroscopic E and B fields and take the medium to be nonmagnetic and neutral so that Maxwell's equations read both the total electric and magnetic fields the set of Maxwell equations inside the dielectric where includes the true and polarization current induced in the material by the outside electric field. We assume a linear relationship between the current and the electric field, hence The set of Maxwell equations outside the dielectric has no current density term The two sets of Maxwell equations are coupled since the vacuum electric field appears in the current density term. For a monochromatic wave at normal incidence, the vacuum electric field has the form with . Now to solve for , we take the curl of the third equation in the first set of Maxwell equation and combine it with the fourth. We simplify the double curl in a couple of steps using Einstein summation. Hence we obtain, Then substituting by , using the fact that we obtain, Realizing that all the fields have the same time dependence , the time derivatives are straightforward and we obtain the following inhomogeneous wave equation with particular solution For the complete solution, we add to the particular solution the general solution of the homogeneous equation which is a superposition of plane waves traveling in arbitrary directions where is found from the homogeneous equation to be Note that we have taken the solution as a coherent superposition of plane waves. Because of symmetry, we expect the fields to be the same in a plane perpendicular to the axis. Hence where is a displacement perpendicular to . Since there are no boundaries in the region , we expect a wave traveling to the right. The solution to the homogeneous equation becomes, Adding this to the particular solution, we get the radiated wave inside the medium () The total field at any position is the sum of the incident and radiated fields at that position. Adding the two components inside the medium, we get the total field This wave travels inside the dielectric at speed We can simplify the above to a familiar form of the index of refraction of a linear isotropic dielectric. To do so, we remember that in a linear dielectric an applied electric field induces a polarization proportional to the electric field . 
When the electric field changes, the induced charges move and produces a current density given by . Since the time dependence of the electric field is , we get which implies that the conductivity Then substituting the conductivity in the equation of , gives which is a more familiar form. For the region , one imposes the condition of a wave traveling to the left. By setting the conductivity in this region , we obtain the reflected wave traveling at the speed of light. Note that the coefficients nomenclature, and , are only adopted to match what we already expect. Hertz vector approach The following is a derivation based on a work by Wangsness and a similar derivation found in chapter 20 of Zangwill's text, Modern Electrodynamics. The setup is as follows, let the infinite half-space be vacuum and the infinite half-space be a uniform, isotropic, dielectric material with electric susceptibility, The inhomogeneous electromagnetic wave equation for the electric field can be written in terms of the electric Hertz Potential, , in the Lorenz gauge as The electric field in terms of the Hertz vectors is given as but the magnetic Hertz vector is 0 since the material is assumed to be non-magnetizable and there is no external magnetic field. Therefore the electric field simplifies to In order to calculate the electric field we must first solve the inhomogeneous wave equation for . To do this, split in the homogeneous and particular solutions Linearity then allows us to write The homogeneous solution, , is the initial plane wave traveling with wave vector in the positive direction We do not need to explicitly find since we are only interested in finding the field. The particular solution, and therefore, , is found using a time dependent Green's function method on the inhomogeneous wave equation for which produces the retarded integral Since the initial electric field is polarizing the material, the polarization vector must have the same space and time dependence More detail about this assumption is discussed by Wangsness. Plugging this into the integral and expressing in terms of Cartesian coordinates produces First, consider only the integration over and and convert this to cylindrical coordinates and call Then using the substitution and so the limits become and Then introduce a convergence factor with into the integrand since it does not change the value of the integral, Then implies , hence . Therefore, Now, plugging this result back into the z-integral yields Notice that is now only a function of and not , which was expected for the given symmetry. This integration must be split into two due to the absolute value inside the integrand. The regions are and . Again, a convergence factor must be introduced to evaluate both integrals and the result is Instead of plugging directly into the expression for the electric field, several simplifications can be made. Begin with the curl of the curl vector identity, therefore, Notice that because has no dependence and is always perpendicular to . Also, notice that the second and third terms are equivalent to the inhomogeneous wave equation, therefore, Therefore, the total field is which becomes, Now focus on the field inside the dielectric. Using the fact that is complex, we may immediately write recall also that inside the dielectric we have . 
Then by coefficient matching we find, and The first relation quickly yields the wave vector in the dielectric in terms of the incident wave as Using this result and the definition of in the second expression yields the polarization vector in terms of the incident electric field as Both of these results can be substituted into the expression for the electric field to obtain the final expression This is exactly the result as expected. There is only one wave inside the medium and it has wave speed reduced by n. The expected reflection and transmission coefficients are also recovered. Extinction lengths and tests of special relativity The characteristic "extinction length" of a medium is the distance after which the original wave can be said to have been completely replaced. For visible light, traveling in air at sea level, this distance is approximately 1 mm. In interstellar space, the extinction length for light is 2 light years. At very high frequencies, the electrons in the medium can't "follow" the original wave into oscillation, which lets that wave travel much further: for 0.5 MeV gamma rays, the length is 19 cm of air and 0.3 mm of Lucite, and for 4.4 GeV, 1.7 m in air, and 1.4 mm in carbon. Special relativity predicts that the speed of light in vacuum is independent of the velocity of the source emitting it. This widely believed prediction has been occasionally tested using astronomical observations. For example, in a binary star system, the two stars are moving in opposite directions, and one might test the prediction by analyzing their light. (See, for instance, the De Sitter double star experiment.) Unfortunately, the extinction length of light in space nullifies the results of any such experiments using visible light, especially when taking account of the thick cloud of stationary gas surrounding such stars. However, experiments using X-rays emitted by binary pulsars, with much longer extinction length, have been successful. References Eponymous theorems of physics Scattering, absorption and radiative transfer (optics)
Ewald–Oseen extinction theorem
[ "Physics", "Chemistry" ]
2,521
[ "Scattering, absorption and radiative transfer (optics)", "Equations of physics", "Eponymous theorems of physics", "Physics theorems" ]
43,668,436
https://en.wikipedia.org/wiki/Berkeley%20Seismological%20Laboratory
The Berkeley Seismological Laboratory (BSL) is a research lab at the Department of Geology at the University of California, Berkeley. It was created from the Berkeley Seismographic Stations, a site on the Berkeley campus where Worldwide Standard Seismographic Network instruments were first deployed in 1959. Today, BSL's mission is to "support fundamental research into all aspects of earthquakes, solid earth processes, and their effects on society".

An experimental early warning system developed by BSL issued a warning 10 seconds before the 6.0 magnitude earthquake that hit the Napa region on August 24, 2014. Such a warning system could potentially give people time to take cover in the event of a quake, preventing injuries caused by falling debris, automatically stopping trains or shutting off gas lines. The system, developed in conjunction with the United States Geological Survey (USGS), the California Institute of Technology and the University of Washington, will eventually cover the entire West Coast. The system would cost $80 million in funding to run for five years in California, or $120 million for the whole West Coast. In July 2015, USGS awarded $4 million in funding to the project partners to turn the current ShakeAlert prototype into a more robust system.

See also
Andrew Lawson
Harry O. Wood

References

External links
History of the BSL

University of California, Berkeley
Seismological observatories, organisations and projects
Earthquake engineering
Berkeley Seismological Laboratory
[ "Engineering" ]
285
[ "Earthquake engineering", "Civil engineering", "Structural engineering" ]
43,672,852
https://en.wikipedia.org/wiki/Normal%20form%20%28dynamical%20systems%29
In mathematics, the normal form of a dynamical system is a simplified form that can be useful in determining the system's behavior. Normal forms are often used for determining local bifurcations in a system. All systems exhibiting a certain type of bifurcation are locally (around the equilibrium) topologically equivalent to the normal form of the bifurcation. For example, the normal form of a saddle-node bifurcation is where is the bifurcation parameter. The transcritical bifurcation near can be converted to the normal form with the transformation .

See also
canonical form, for use of the terms canonical form, normal form, or standard form more generally in mathematics

References

Further reading

Bifurcation theory
Dynamical systems
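Since the equations of the normal forms are not reproduced above, the following Python sketch assumes the textbook versions, dx/dt = r + x^2 for the saddle-node bifurcation and dx/dt = r*x - x^2 for the transcritical bifurcation, and simply lists their equilibria on either side of the bifurcation at r = 0. It is an illustrative example, not part of the original article.

```python
import numpy as np

def saddle_node_equilibria(r):
    """Fixed points of dx/dt = r + x**2 (assumed textbook normal form)."""
    if r > 0:
        return []                      # no real fixed points after the bifurcation
    x = float(np.sqrt(abs(r)))         # at r = 0 the two fixed points coincide at x = 0
    return sorted({x, -x})

def transcritical_equilibria(r):
    """Fixed points of dx/dt = r*x - x**2 (assumed textbook normal form)."""
    return sorted({0.0, r})            # x = 0 and x = r exchange stability at r = 0

if __name__ == "__main__":
    for r in (-0.25, 0.0, 0.25):
        print(f"r = {r:+.2f}: saddle-node fixed points {saddle_node_equilibria(r)}, "
              f"transcritical fixed points {transcritical_equilibria(r)}")
```

As expected under these assumptions, the saddle-node form has two fixed points for r < 0 that merge and disappear as r passes through zero, while the transcritical form always has the two fixed points 0 and r, which exchange stability at r = 0.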
Normal form (dynamical systems)
[ "Physics", "Mathematics" ]
150
[ "Bifurcation theory", "Mechanics", "Dynamical systems" ]
43,673,063
https://en.wikipedia.org/wiki/C18H18O7
The molecular formula C18H18O7 (molar mass: 346.33 g/mol, exact mass: 346.1053 u) may refer to:
Scillavone B
Ramalic acid
C18H18O7
[ "Chemistry" ]
61
[ "Isomerism", "Set index articles on molecular formulas" ]
43,673,640
https://en.wikipedia.org/wiki/Joseph%20Kuczkowski
Joseph A. Kuczkowski is a retired Goodyear scientist, noted for successfully explaining the mechanisms of antioxidant and antiozonant function, and for commercial development of new antiozonant systems and improvement of the stability of polymeric materials.

Education
1963 - BS Chemistry, Canisius College, Buffalo, New York
1966 - MS Chemistry, Canisius College, Buffalo, New York
1968 - Ph.D. Organic Chemistry, Wayne State University, Detroit, Michigan, supervised by Prof. Michael Cava
1971 - postdoctoral fellow under Prof. Adam M. Aguiar in organo-phosphorus chemistry

Career
1968-1970 - U.S. Army Medical Service Corps, responsible for the Clinical Chemistry Department of the 6th US Army Medical Laboratory
1971 - joined Goodyear Tire & Rubber Company as a Senior Research Chemist
1977 - Section Head, exploratory products and processes
1982 - Section Head, Rubber Chemicals
1984 - Section Head, Rubber Chemicals & Hydroquinone
1988 - R&D Associate, Chemicals and Specialty Polymers
2001 - retired from Goodyear

Kuczkowski holds 23 US patents. Of these, 12 are products or processes in production, including Wingstay SN and Wingstay K.

Awards
2000 - Melvin Mooney Award of the Rubber Division of the American Chemical Society.
2011 - Charles Goodyear Medal of the Rubber Division of the American Chemical Society.

References

Living people
Polymer scientists and engineers
21st-century American engineers
Year of birth missing (living people)
Tire industry people
Goodyear Tire and Rubber Company people
Joseph Kuczkowski
[ "Chemistry", "Materials_science" ]
310
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
43,674,326
https://en.wikipedia.org/wiki/Take-off%20warning%20system
A take-off warning system or TOWS is a set of warning signals required on most commercial aircraft, designed to alert the pilots of potentially dangerous errors in an aircraft's take-off configuration.

There are numerous systems on board an aircraft that must be set in the proper configuration to allow it to take off safely. Prior to every flight, the flight officers use checklists to verify that each of the many systems is operating and has been configured correctly. Due to the inevitability of human error, even the checklist procedure can lead to failures to properly configure the aircraft. Several improper configurations can leave an aircraft completely unable to become airborne; these conditions can easily result in fatal hull-loss accidents. To reduce this risk, all major nations now mandate something similar to the US requirement that on (nearly) "all airplanes with a maximum weight more than 6,000 pounds and all jets [...] a takeoff warning system must be installed". This system must meet the following requirements:

(a) The system must provide to the pilots an aural warning that is automatically activated during the initial portion of the takeoff roll if the airplane is in a configuration that would not allow a safe takeoff. The warning must continue until—
(1) The configuration is changed to allow safe takeoff, or
(2) Action is taken by the pilot to abandon the takeoff roll.
(b) The means used to activate the system must function properly for all authorized takeoff power settings and procedures and throughout the ranges of takeoff weights, altitudes, and temperatures for which certification is requested.

TOWS is designed to sound a warning for numerous other dangerous errors in the take-off configuration, such as the flaps and slats not being extended when the throttles are opened while the aircraft is on the ground. The alert is typically in the form of an audible warning horn accompanied by a voice message that indicates the nature of the configuration error.

See also
A number of aircraft disasters due to improper configuration have happened in spite of the presence of a functional take-off warning system:
Delta Air Lines Flight 1141
LAPA Flight 3142
Lufthansa Flight 540
Mandala Airlines Flight 91
Northwest Airlines Flight 255
Spanair Flight 5022

References

Avionics
Warning systems
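To make the logic described above concrete, the following Python sketch models a deliberately simplified take-off warning check. It is a hypothetical illustration, not a real avionics implementation: the monitored items, the throttle threshold and all names are invented for the example, and real systems differ by aircraft type. The sketch mirrors the behaviour described in the regulation quoted above: the warning is generated only on the ground when take-off power is applied with an unsafe configuration, and it clears when the configuration is corrected or the take-off is abandoned (throttles retarded).

```python
from dataclasses import dataclass

@dataclass
class AircraftState:
    on_ground: bool
    throttle_lever_pct: float      # 0-100, hypothetical units
    flaps_in_takeoff_range: bool
    slats_extended: bool
    speedbrakes_stowed: bool
    parking_brake_released: bool

TAKEOFF_POWER_THRESHOLD = 70.0     # hypothetical threshold for "take-off power selected"

def configuration_faults(state: AircraftState) -> list[str]:
    """Return the configuration items that would prevent a safe take-off."""
    faults = []
    if not state.flaps_in_takeoff_range:
        faults.append("FLAPS")
    if not state.slats_extended:
        faults.append("SLATS")
    if not state.speedbrakes_stowed:
        faults.append("SPEEDBRAKES")
    if not state.parking_brake_released:
        faults.append("PARKING BRAKE")
    return faults

def tows_warning(state: AircraftState) -> list[str]:
    """Aural warning is active only on the ground with take-off power applied and an
    unsafe configuration; it clears when either condition is removed."""
    takeoff_power = state.throttle_lever_pct >= TAKEOFF_POWER_THRESHOLD
    if state.on_ground and takeoff_power:
        return configuration_faults(state)
    return []

if __name__ == "__main__":
    state = AircraftState(on_ground=True, throttle_lever_pct=85.0,
                          flaps_in_takeoff_range=False, slats_extended=True,
                          speedbrakes_stowed=True, parking_brake_released=True)
    print("TOWS warning:", tows_warning(state))   # -> ['FLAPS']
```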
Take-off warning system
[ "Technology", "Engineering" ]
455
[ "Safety engineering", "Avionics", "Measuring instruments", "Aircraft instruments", "Warning systems" ]
43,675,194
https://en.wikipedia.org/wiki/International%20Flame%20Research%20Foundation
The International Flame Research Foundation – IFRF is a non-profit research association and network created in 1948 in IJmuiden (Netherlands), established in Livorno (Italy) between 2005 and 2016 (Fondazione Internazionale per la Ricerca Sulla Combustione – ONLUS), and in Sheffield (UK) since 2017. Meredith Thring was one of the founders. The IFRF Membership Network unites some 1000 combustion researchers from 130 industrial companies and academic institutions worldwide, around a common interest in efficient and environmentally responsible industrial combustion, with a focus on flame studies.

History
The IFRF can be traced to a proposal written in 1948 by Meredith Thring, head of the Physics Department in the newly formed British Iron and Steel Research Association (BISRA). Entitled Proposals for the Establishment of an International Research Project on Luminous Radiation, the document resulted in the formation of the International Flame Radiation Research Committee with representatives of the steel, fuel and appliance-making industries in France, Holland and England - specifically the British Iron and Steel Research Association (BISRA), the Iron and Steel Research Association of France (IRSID) and the Royal Dutch Iron and Steel Company (KNHS).

Publications
The IFRF is the publisher of technical reports and regular publications:
The Industrial Combustion Journal () since 1999, named IFRF Combustion Journal between Sept. 1999 and Aug. 2009 ()
The Monday Night Mail - MNM - () since 1999; in 1998 a few issues of the IFRF Newsletter were also published
The Combustion Handbook () since 2001
These publications are freely available online.

Events
The IFRF organises events to disseminate knowledge on combustion: conferences, technical meetings (called TOTeMs), common days with other technical or scientific associations, and courses.

Topic Oriented Technical Meetings (TOTeM)
TOTeMs have been organised since 1989, once or twice a year.

IFRF Conferences
IFRF Conferences (formerly the Members Conference) are organised approximately every two or three years.

Short courses
The IFRF organises short courses:
Industrial Combustion training, Air Liquide, Jouy-en-Josas, France, 18-19 March 2025
IFRF Hydrogen Short Course, Sheffield, UK, 2024
"Combustion & Emissions in Furnaces and Kilns – An Industrial Approach", 16 to 18 October 2002, Villa Olmo – Como, Italy
5th Flame Research Course, Koningshof Congress Centre, Veldhoven, The Netherlands, 1992
4th Flame Research Course, Christal Hotel, Prague, Czechoslovakia, 9–13 September 1991

Other events
The IFRF organises events with other scientific associations such as the Combustion Institute, and special flame days with other national committees.

Structure
The IFRF is organised in 9 national committees plus the Associate Member Group (AMG) where no national committee exists.

Committees
American Flame Research Committee - AFRC
British Flame Research Committee - BFRC
Chinese Flame Research Committee - CFRC
Finnish Flame Research Committee - FFRC
French Flame (Comité français) - CF
German Flame (Deutsche Vereinigung für Verbrennungsforschung e.V.) - DVV
Italian Flame (Comitato Italiano) - CI
Dutch Flame (Nederlandse Vereniging voor Vlamonderzoek) - NVV
Swedish Flame Research Committee - SFRC

Governance
The IFRF is managed by a Council and an Executive Committee.

Locations
From 1948 to 2005 the IFRF facilities were located in the CORUS R&D centre at IJmuiden (Netherlands). In 2005, the research station was relocated to ENEL facilities in Livorno (Italy), and the measurement programme was restarted on November 27, 2006. In 2015 a process to relocate the IFRF headquarters was initiated, leading to the designation of the University of Sheffield and its PACT laboratory as the new IFRF location from 2017.

See also
The Combustion Institute, a network of researchers specialised in combustion, mainly from academia
The European Conference on Industrial Furnaces and Boilers - INFUB, a conference related to industrial combustion

References

External links

International scientific organizations
Research institutes in Italy
Scientific organizations established in 1948
Combustion
International Flame Research Foundation
[ "Chemistry" ]
840
[ "Combustion" ]
36,607,703
https://en.wikipedia.org/wiki/PyLadies
PyLadies is an international mentorship group which focuses on helping more women become active participants in the Python open-source community. It is part of the Python Software Foundation. It was started in Los Angeles in 2011. The mission of the group is to create a diverse Python community through outreach, education, conferences and social gatherings. PyLadies also provides funding for women to attend open source conferences. The aim of PyLadies is to increase the participation of women in computing. PyLadies became a multi-chapter organization with the founding of the Washington, D.C., chapter in August 2011.

History
The organization was created in Los Angeles in April 2011 by seven women: Audrey Roy Greenfeld, Christine Cheung, Esther Nam, Jessica Venticinque (Stanton at the time), Katharine Jarmul, Sandy Strong, and Sophia Viklund. Around 2012, the organization filed for nonprofit status. As of March 2024, PyLadies has 129 chapters.

Organization
PyLadies has conducted outreach events for both beginners and experienced users, including hackathons, social nights and workshops for Python enthusiasts. Each chapter is free to run itself as it wishes as long as it is focused on the goal of empowering women and other marginalized genders in tech. Women make up the majority of the group, but membership is not limited to women and the group is open to helping people who identify as other gender identities as well. In the past, PyLadies has also collaborated with other organizations, for instance R-Ladies.

References

External links
PyLadies Website

Mentorships
Women in computing
Free and open-source software organizations
Organizations for women in science and technology
Software developer communities
Python (programming language)
PyLadies
[ "Technology" ]
351
[ "Organizations for women in science and technology", "Women in science and technology" ]
36,613,571
https://en.wikipedia.org/wiki/K%C3%BCpfm%C3%BCller%27s%20uncertainty%20principle
Küpfmüller's uncertainty principle, formulated by Karl Küpfmüller in 1924, states that the product of the rise time of a bandlimited signal and its bandwidth is a constant, with either or .

Proof
A bandlimited signal with Fourier transform is given by the multiplication of any signal with a rectangular function of width in the frequency domain: This multiplication with a rectangular function acts as a band-limiting filter and results in Applying the convolution theorem, we also know Since the Fourier transform of a rectangular function is a sinc function and vice versa, it follows directly by definition that Now the first root is at . This is the rise time of the pulse . Since the rise time influences how fast g(t) can go from 0 to its maximum, it affects how fast the bandwidth-limited signal transitions from 0 to its maximal value. We have the important finding that the rise time is inversely related to the frequency bandwidth: the shorter the rise time, the wider the frequency bandwidth needs to be. Equality holds as long as is finite. Because a real signal has both positive and negative frequencies of the same frequency band, becomes , which leads to instead of .

See also
Heisenberg's uncertainty principle
Nyquist theorem

References

Further reading

Electronic engineering
1924 in science
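Because the formulas themselves are not reproduced above, the following Python sketch assumes the standard form of the argument: an ideal low-pass (rectangular) filter of one-sided bandwidth Δf has a sinc-shaped impulse response whose first zero after the central peak occurs at Δt = 1/(2Δf), so that the product Δt·Δf is the constant 1/2. The code is an illustrative numerical check, not part of the original article; the symbol names and the value of the constant are assumptions based on the usual statement of the principle.

```python
import numpy as np

def first_zero_of_sinc_response(delta_f, t_max=1.0, samples=200_001):
    """Numerically locate the first positive zero of the impulse response
    g(t) = 2*delta_f*sinc(2*delta_f*t) of an ideal low-pass filter with
    one-sided bandwidth delta_f (np.sinc(x) = sin(pi*x)/(pi*x))."""
    t = np.linspace(0.0, t_max, samples)
    g = 2.0 * delta_f * np.sinc(2.0 * delta_f * t)
    sign_changes = np.where(np.diff(np.sign(g)) != 0)[0]
    return t[sign_changes[0]]          # time of the first sign change after t = 0

if __name__ == "__main__":
    for delta_f in (1.0, 5.0, 20.0):   # arbitrary bandwidths in Hz
        delta_t = first_zero_of_sinc_response(delta_f)
        print(f"delta_f = {delta_f:5.1f} Hz -> rise time ~ {delta_t:.4f} s, "
              f"product = {delta_t * delta_f:.3f}")
```

Running the sketch for several bandwidths prints a product of approximately 0.5 in each case, consistent with a constant product of rise time and bandwidth.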
Küpfmüller's uncertainty principle
[ "Technology", "Engineering" ]
260
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]