id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64)
|---|---|---|---|---|---|
5,600,138 | https://en.wikipedia.org/wiki/Fertigation | Fertigation is the injection of fertilizers, soil amendments, water amendments and other water-soluble products into an irrigation system.
Chemigation, the injection of chemicals into an irrigation system, is related to fertigation. The two terms are sometimes used interchangeably; however, chemigation is generally a more controlled and regulated process because of the nature of the chemicals used. Chemigation often involves insecticides, herbicides, and fungicides, some of which pose health threats to humans, animals, and the environment.
Uses
Fertigation is practiced extensively in commercial agriculture and horticulture. Fertigation is also increasingly being used for landscaping as dispenser units become more reliable and easier to use.
Fertigation is used to add additional nutrients or to correct nutrient deficiencies detected in plant tissue analysis. It is usually practiced on high-value crops such as vegetables, turf, fruit trees, and ornamentals.
Commonly used chemicals
Nitrogen is the most commonly used plant nutrient. Naturally occurring nitrogen (N2) is a diatomic molecule which makes up approximately 78% of the Earth's atmosphere. Most plants cannot directly consume diatomic nitrogen, so it must be supplied as a component of other chemical compounds that plants can take up; commonly, anhydrous ammonia, ammonium nitrate, and urea are used as bioavailable sources of nitrogen. Other nutrients needed by plants include phosphorus and potassium. Like nitrogen, these must be supplied in compounds such as monoammonium phosphate or diammonium phosphate to serve as bioavailable nutrients. A common source of potassium is muriate of potash, which is chemically potassium chloride. A soil fertility analysis is used to determine which of the more stable nutrients should be used.
Fungicides are used on sod (or turf), such as on golf courses and sod farms. One of the earliest was cyproconazole, marketed in 1995.
Advantages
The benefits of fertigation over conventional or drop-fertilizing methods include:
Increased nutrient absorption by plants.
Accurate placement of nutrients: where the water goes, the nutrients go as well.
Ability to "microdose": feeding the plants just enough so that nutrients are absorbed and are not washed into stormwater the next time it rains.
Reduction of fertilizer, chemicals, and water needed.
Reduced leaching of chemicals into the water supply.
Reduced water consumption due to the plant's increased root mass's ability to trap and hold water.
Application of nutrients can be controlled at the precise time and rate necessary.
Minimized risk of roots contracting soil-borne diseases from contaminated soil.
Reduction of soil erosion issues, as the nutrients are pumped through the water drip system; fertigation methods often also decrease leaching.
Disadvantages
The concentration of the solution may decrease as the fertilizer dissolves, depending on equipment selection; poorly selected equipment may lead to poor nutrient placement.
The water supply for fertigation must be kept separate from the domestic water supply to avoid contamination.
Possible pressure loss in the main irrigation line.
The process depends on a water supply that is not restricted by drought rationing.
Methods used
Drip irrigation – Less wasteful than sprinklers. It is not only more efficient in fertilizer usage, but can also maximize nutrient uptake in plants such as cotton. Drip irrigation with fertigation can also increase yield and quality of fruit and flowers, especially in subsurface drip systems rather than above-surface drip tape.
Sprinkler systems – Increases leaf and fruit quality.
Continuous application – Fertilizer is supplied at a constant rate.
Three-stage application – Irrigation starts without fertilizers. Fertilizers are applied later in the process once the ground is wet, and the final stage clears fertilizers out of the irrigation system.
Proportional application – Injection rate is proportional to the water discharge rate (see the sketch after this list).
Quantitative application – Nutrient solution is applied in a calculated amount to each irrigation block.
Other methods of application include the lateral move, the traveler gun, and solid set systems.
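The proportional method above reduces to a simple mass balance: injector flow is whatever fraction of the irrigation flow delivers the target nutrient concentration. Below is a minimal sketch of that calculation; the function name and all flow and concentration figures are illustrative assumptions, not values from the text.

```python
# Hypothetical sketch of proportional fertigation dosing: the injector flow
# tracks the irrigation flow so the blended nutrient concentration stays constant.

def injection_rate(irrigation_flow_l_min: float,
                   target_ppm: float,
                   stock_ppm: float) -> float:
    """Injector flow (L/min) so the blended water reaches target_ppm.

    Mass balance: irrigation_flow * target_ppm = injector_flow * stock_ppm
    (assumes the injector flow is small relative to the irrigation flow).
    """
    return irrigation_flow_l_min * target_ppm / stock_ppm

# Example: 200 L/min irrigation, 150 ppm N target, 60,000 ppm N stock solution
print(injection_rate(200, 150, 60_000))  # -> 0.5 L/min of stock solution
```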
System design
Fertigation systems help farmers distribute fertilizer. The simplest type of fertigation system consists of a tank with a pump, distribution pipes, capillaries, and a dripper pen.
All systems should be placed on a raised or sealed platform, not in direct contact with the earth. Each system should also be fitted with chemical spill trays.
Because of the potential risk of contamination in the potable (drinking) water supply, a backflow prevention device is required for most fertigation systems. Backflow requirements may vary greatly. Therefore, it is very important to understand the proper level of backflow prevention required by law. In the United States, the minimum backflow protection is usually determined by state regulation. Each city or town may set the level of protection required.
See also
Drip irrigation
Foliar feeding
Soil defertilisation
Sustainable agriculture
Water conservation
Fertilizer injector
References
Agricultural terminology
Fertilizers
Irrigation
Lawn care
Plant nutrition | Fertigation | Chemistry | 1,614 |
38,549,290 | https://en.wikipedia.org/wiki/Soler%20model | The Soler model is a quantum field theory model of Dirac fermions interacting via four-fermion interactions in 3 spatial and 1 time dimension. It was introduced in 1938 by Dmitri Ivanenko and re-introduced and investigated in 1970 by Mario Soler as a toy model of a self-interacting electron.
This model is described by the Lagrangian density

$$\mathcal{L} = \bar\psi\left(i\partial\!\!\!/ - m\right)\psi + \frac{g}{2}\left(\bar\psi\psi\right)^2$$

where $g$ is the coupling constant, $\partial\!\!\!/ = \gamma^\mu \frac{\partial}{\partial x^\mu}$ in the Feynman slash notation, and $\bar\psi = \psi^\dagger\gamma^0$. Here $\gamma^\mu$, $0 \le \mu \le 3$, are Dirac gamma matrices.
The corresponding equation can be written as

$$i\frac{\partial\psi}{\partial t} = -i\sum_{j=1}^{3}\alpha^j\frac{\partial\psi}{\partial x^j} + m\beta\psi - g\left(\bar\psi\psi\right)\beta\psi,$$

where $\alpha^j = \gamma^0\gamma^j$, $\beta = \gamma^0$, and $\alpha^j$, $\beta$ are the Dirac matrices.
In one dimension, this model is known as the massive Gross–Neveu model.
Generalizations
A commonly considered generalization is

$$\mathcal{L} = \bar\psi\left(i\partial\!\!\!/ - m\right)\psi + \frac{g}{k+1}\left(\bar\psi\psi\right)^{k+1}$$

with $k > 0$, or even

$$\mathcal{L} = \bar\psi\left(i\partial\!\!\!/ - m\right)\psi + g\,F\!\left(\bar\psi\psi\right),$$

where $F$ is a smooth function.
Features
Internal symmetry
Besides the unitary symmetry U(1), in dimensions 1, 2, and 3 the equation has SU(1,1) global internal symmetry.
Renormalizability
The Soler model is renormalizable by power counting for $k = 1$ and in one dimension only, and non-renormalizable for higher values of $k$ and in higher dimensions.
Solitary wave solutions
The Soler model admits solitary wave solutions of the form

$$\psi(x,t) = \phi(x)\,e^{-i\omega t},$$

where $\phi$ is localized (becomes small when $|x|$ is large) and $\omega$ is a real number.
Reduction to the massive Thirring model
In spatial dimension 2, the Soler model coincides with the massive Thirring model,
due to the relation
,
with
the relativistic scalar
and
the charge-current density.
The relation follows from the identity
,
for any .
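The $\mathbb{C}^2$ identity can be verified directly; a minimal check, assuming the representation $\gamma^0 = \sigma_3$, $\gamma^1 = i\sigma_1$, $\gamma^2 = i\sigma_2$ and writing $\psi = (a, b)$ (the choice of representation is an assumption; other choices work similarly):

```latex
% Verification of (psi-bar psi)^2 = J^mu J_mu for psi = (a,b) in C^2,
% with gamma^0 = sigma_3, gamma^1 = i sigma_1, gamma^2 = i sigma_2:
\begin{align*}
\bar\psi\psi &= |a|^2 - |b|^2, &
J^0 &= |a|^2 + |b|^2,\\
J^1 &= -2\operatorname{Im}(a^* b), &
J^2 &= 2\operatorname{Re}(a^* b),
\end{align*}
% so that, with metric signature (+,-,-),
\begin{align*}
J^\mu J_\mu = (J^0)^2 - (J^1)^2 - (J^2)^2
 = \left(|a|^2+|b|^2\right)^2 - 4\,|a^* b|^2
 = \left(|a|^2-|b|^2\right)^2 = \left(\bar\psi\psi\right)^2.
\end{align*}
```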
See also
Dirac equation
Gross–Neveu model
Nonlinear Dirac equation
Thirring model
References
Quantum field theory | Soler model | Physics | 341 |
40,807,668 | https://en.wikipedia.org/wiki/Cyanoacetic%20acid | Cyanoacetic acid is an organic compound. It is a white, hygroscopic solid. The compound contains two functional groups, a nitrile (−C≡N) and a carboxylic acid. It is a precursor to cyanoacrylates, components of adhesives.
Preparation and reactions
Cyanoacetic acid is prepared by treatment of chloroacetate salts with sodium cyanide followed by acidification. Electrosynthesis by cathodic reduction of carbon dioxide and anodic oxidation of acetonitrile also affords cyanoacetic acid.
Cyanoacetic acid is used for cyanoacetylation; the first convenient method was described by J. Slätt.
It is far more acidic than acetic acid (pKa 4.76): its pKa is 2.5, corresponding to Ka = 2.8 × 10⁻³, roughly 160 times larger.
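The factor can be checked from the dissociation constants (taking pKa ≈ 4.76, i.e. Ka ≈ 1.8 × 10⁻⁵, for acetic acid):

```latex
\frac{K_a(\text{cyanoacetic acid})}{K_a(\text{acetic acid})}
  = \frac{2.8\times10^{-3}}{1.8\times10^{-5}} \approx 1.6\times10^{2},
\qquad\text{equivalently}\qquad
10^{\,4.76 - 2.5} = 10^{\,2.26} \approx 1.8\times10^{2}.
```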
Upon heating at 160 °C, it undergoes decarboxylation to give acetonitrile:

NCCH2CO2H → CH3CN + CO2
Applications
In its largest scale application, cyanoacetic acid is first esterified to give ethyl cyanoacetate. Condensation of that ester with formaldehyde gives ethyl cyanoacrylate, which is used as superglue. As of 2007, more than 10,000 tons of cyanoacetic acid were produced annually.
Cyanoacetic acid is a versatile intermediate in the preparation of other chemicals. It is a precursor to synthetic caffeine via the intermediacy of theophylline. It is a building block for many drugs, including dextromethorphan, amiloride, sulfadimethoxine, and allopurinol.
Safety
The LD50 (oral, rats) is 1.5 g/kg.
References
Nitriles
Carboxylic acids | Cyanoacetic acid | Chemistry | 373 |
36,460,823 | https://en.wikipedia.org/wiki/Journal%20of%20Biomedical%20Semantics | The Journal of Biomedical Semantics is a peer-reviewed open-access scientific journal that covers biomedical semantics.
History
It was established in 2010 and is published by BioMed Central. The editors-in-chief are Dietrich Rebholz-Schuhmann (University of Zurich) and Goran Nenadic (University of Manchester). The journal is abstracted and indexed in Scopus, Science Citation Index Expanded, and BIOSIS Previews.
References
External links
Biomedical informatics journals
Creative Commons Attribution-licensed journals
BioMed Central academic journals
Academic journals established in 2010
English-language journals | Journal of Biomedical Semantics | Biology | 120 |
11,512,238 | https://en.wikipedia.org/wiki/Erysiphe%20brunneopunctata | Erysiphe brunneopunctata is a plant pathogen that causes powdery mildew on monkey flower.
References
Fungal plant pathogens and diseases
Eudicot diseases
brunneopunctata
Fungi described in 1984
Fungus species | Erysiphe brunneopunctata | Biology | 51 |
36,212,148 | https://en.wikipedia.org/wiki/%28589683%29%202010%20RF43 | (589683) 2010 RF43, provisional designation 2010 RF43, is a large trans-Neptunian object orbiting in the scattered disc in the outermost regions of the Solar System. The object was discovered on 9 September 2010, by American astronomers David Rabinowitz, Megan Schwamb and Suzanne Tourtellotte at ESO's La Silla Observatory in northern Chile.
Orbit and classification
2010 RF43 orbits the Sun at a distance of 37.5–61.9 AU once every 350 years and 4 months (127,948 days; semi-major axis of 49.7 AU). Its orbit has an eccentricity of 0.25 and an inclination of 31° with respect to the ecliptic. The body's observation arc begins with a precovery observation taken at Siding Spring Observatory in August 1976.
Due to its relatively high eccentricity and inclination, it is an object of the scattered disc rather than one of the regular Kuiper belt. Its perihelion of 37.5 AU is also too low to make it a detached object, which typically stay above 40 AU and never come close to the orbit of Neptune.
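The quoted period is consistent with Kepler's third law for a heliocentric orbit; a quick sanity check (using the 49.7 AU semi-major axis given above):

```python
# Kepler's third law for a small body orbiting the Sun: P[yr]**2 = a[AU]**3.
a_au = 49.7                         # semi-major axis quoted above
period_years = a_au ** 1.5          # ~350.4 years
period_days = period_years * 365.25
print(f"{period_years:.1f} yr = {period_days:.0f} days")
# -> 350.4 yr = 127977 days, close to the quoted 127,948 days
```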
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 20 September 2021, receiving the number 589683 in the minor planet catalog. As of 2021, it has not been named.
Physical characteristics
Diameter and albedo
Based on an absolute magnitude of 3.9, and an assumed albedo of 0.09, the Johnston archive estimates a mean diameter of approximately .
The Collaborative Asteroid Lightcurve Link assumes an albedo of 0.10 and calculates a diameter of based on an absolute magnitude of 4.1.
Rotation period
As of 2020, no rotational lightcurve of this object has been obtained from photometric observations. The object's rotation period, pole and shape remain unknown.
References
External links
MPEC 2011-U09 : 2010 RF43, Minor Planet Electronic Circular, 17 October 2011
589683
20100906 | (589683) 2010 RF43 | Physics,Astronomy | 417 |
14,606,730 | https://en.wikipedia.org/wiki/Cass%20criterion | The Cass criterion, also known as the Malinvaud–Cass criterion, is a central result in theory of overlapping generations models in economics. It is named after David Cass.
A major feature which sets overlapping generations models in economics apart from the standard model with a finite number of infinitely lived individuals is that the First Welfare Theorem might not hold—that is, competitive equilibria may not be Pareto optimal.
If $p_t$ represents the vector of Arrow–Debreu commodity prices prevailing in period $t$ and if

$$\sum_{t=1}^{\infty}\frac{1}{\left\| p_t \right\|} < \infty,$$

then a competitive equilibrium allocation is inefficient.
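For intuition, a minimal pair of examples under the criterion as stated above (the price paths are hypothetical):

```latex
% Stationary prices: the sum diverges, so no inefficiency is implied.
\|p_t\| = c \;\;\forall t
  \quad\Longrightarrow\quad
  \sum_{t=1}^{\infty}\frac{1}{\|p_t\|} = \sum_{t=1}^{\infty}\frac{1}{c} = \infty.
% Geometrically growing prices: the sum converges, so the equilibrium is inefficient.
\|p_t\| = c\,\rho^{\,t},\;\rho > 1
  \quad\Longrightarrow\quad
  \sum_{t=1}^{\infty}\frac{1}{c\,\rho^{\,t}} = \frac{1}{c\,(\rho-1)} < \infty.
```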
References
Economics and time | Cass criterion | Physics | 120 |
60,410,202 | https://en.wikipedia.org/wiki/Sexual%20economics | Sexual economics theory is a highly controversial hypothesis in the field of evolutionary psychology. The theory purports to describe how men and women think, feel, and behave around sex and related sexual events, and states that the thinking, preferences and behavior of men and women follow fundamental economic principles. It was proposed by psychologists Roy Baumeister and Kathleen Vohs.
Sexual economics theory enjoyed support from some in the fields of social psychology and evolutionary psychology during the 2000s, but is now regarded by many as an outmoded fringe theory based on outdated western stereotypes of gender which do not replicate cross-culturally.
Definitions
Baumeister's proposal defines sex as a marketplace deal according to the highly controversial maxim (sometimes associated with a paraphrase of Donald Symons) that sexuality is "something that women have and men want". Baumeister claims that sex is a resource that women hold overall. According to this claim, women hold on to their bodies until they receive enough motivation to give them up, such as love, commitment, time, attention, caring, loyalty, respect, happiness and money from another party. On the other side, men are the ones who offer the resources that entice women into sex.
Sexual economics is based on social exchange theory: it alleges that people are willing to give up something if they can get in exchange what they believe will benefit them more. The theory rests on the belief that typically one party is more eager to exchange resources for what the other party holds, thus causing a bargaining power imbalance. At this point, the party who is less willing to exchange what they have has a higher control in this relationship. In the example of a sexual relationship, if one side wants to have sexual intercourse less than the other, he or she can hold out until a more attractive offer is made. In the classic formulation of this theory based on western gender and sexual stereotypes, women are constructed as having a lower libido than men, and therefore are the ones who "hold out" and extract resources from men as a form of manipulation. This apparent modern and western trend is retroprojected onto previous times and non-western societies by many evolutionary psychologists, despite little to no evidence justifying such retroprojection and a great deal of evidence against it.
In the view of Mark Regnerus, the economic perspective is clear: majority rule is a political principle that often works in human society, but sex is an exception in which the minority rules, in that the party less eager for sex sets the terms of exchange.
Female and male status
This theory rests on the controversial belief that sexual activity is "naturally" more desirable for males than females in human societies. In some primates, male aggression against females has the effect of controlling female sexuality for the male's reproductive advantage. Furthermore, the evolutionary perspective provides a hypothesis to help explain cross-cultural variation in the frequency of male aggression against women. Variables include the protection of women by family or community, male alliances, male strategies for protecting spouses and achieving adulterous matings, and male resource control.
According to sexual economics theory, males and females are different both physically and physiologically. According to the model, men give women resources, and then women will allow sex to take place. Under the context of sex, the trade of sex and resources keeps happening (Baumeister alleges that female control of sexuality and male competition for mates are consistent traits through eras and cultures, in sharp contrast to the available ethnographic evidence), and society has acknowledged that female sexuality has more value than male sexuality. For instance, men and women in the west have different feelings about their virginity. Women are more likely to think of their virginity as a precious gift and cherish it, while men see their virginity as a shameful condition and want to get rid of it early in life. This is a culturally bound response, as in some societies virginity has little to no value and anxieties surrounding female sexuality do not apply. The sexual economics theory, however, generally fails to deal with ethnographic data which dispute its core premises, instead assuming itself to be correct based on self-report data from western societies and projecting this modern data onto previous societies and other time periods.
It is also claimed that prostitution (the exchange of sex for money or equivalent items) may be a threat to women's status because sex is mostly considered as part of an intimate relationship instead of a contract.
Society situations
Domestic violence
Some examples have been given by proponents of sexual economics theory which they allege support their claims. In a violent relationship, women are often more likely to be victimized. In one study, the results showed that jealousy is the most frequently cited explanation for domestic violence by women, but for men two elements lead to violence. The first involves dominant destructive thoughts (also referred to as the "critical inner voice"), such as "She's trying to fool you" or "You are not man enough if you don't control her in mind and body." The other involves a detrimental illusion (also referred to as a "fantasy bond"): a sense that the other person constitutes a whole with the subject and is essential for their happiness.
Several advances in promoting equality between the sexes have been made (such as abortion law modification in some areas), but most agrarian or industrialized societies are still patriarchal societies. Men are thought of as stronger than women, especially physically. The expectation of being masculine and more powerful than women could be destructive for men by leading to violence. Therefore, proponents of this theory claim that a woman who has a violent partner might choose to offer sex to comfort him and distract him from abusing her. In this case, trading sex is considered a useful way for women to escape emotional and physical abuse in a violent relationship.
Adultery
The punishment for adultery is different between different genders in some countries. In some cultures, adultery is considered a crime, and although punishments for adultery such as stoning are also applied to men, the vast majority of the victims are women. In some cultures, the wives' adultery can be a viable reason if the husband wants to divorce, however, husbands' adultery cannot justify divorce. This is held up as evidence for sex being considered a female resource, although the copious number of cultures which lack clear evidence of a sexual "double standard" are generally either ignored or proclaimed (typically without supporting evidence) as actually containing said "double standard". David Barash, for instance, attempted to respond to Christopher Ryan's claims about sexual egalitarianism in hunter-gatherer societies by stating that the sexual double standard is a human universal, yet offered no documentation for this claim and did not engage with the past century of primary ethnographic sources which dispute it.
Proponents of the sexual economic theory highlight countries such as the Philippines, where the law differentiates based on gender: a woman can be charged as a criminal of adultery if she has had sexual intercourse with someone other than her husband, but a man can only be charged with crimes such as concubinage, either keeping his mistress at home, or co-habitating with her, or having sexual relationships under scandalous circumstances. The existence of many alternative patterns in pre-state societies is generally not mentioned by proponents of the sexual economic theory.
Pornography
In some countries, there is a significant difference between men and women in the consumption of pornography. Some researchers have attempted to provide evidence for sexual differences in libido by pointing out that men are more likely to view pornography compared to women, and there are more male users on pornography websites than female users. Pornhub, one of the biggest pornography websites in the world, reported at the end of 2019 that 32% of its users that year were female and 68% were male, meaning male users outnumbered female users by roughly two to one worldwide. While at first blush this finding seems to promote the universality of sexual economics, those utilizing the pornography argument for higher male libido typically neglect to mention that there is significant variation between countries, with some nations having significantly more female than male Pornhub users, as well as the fact that even in western nations, the gap is closing rapidly. They also struggle to account for the huge female overrepresentation in non-visual forms of erotica, such as erotic novels.
Human trafficking
Human trafficking is a global issue and has existed for centuries; nevertheless, it entered public consciousness around the beginning of the twenty-first century. Human trafficking is the process of enslaving people and exploiting them. Sex trafficking is one of the most common types of human trafficking. According to data from The United Nations Office for Drugs and Crime in 2016, 51% of identified victims of human trafficking were women, children made up 28%, and 21% were men. 72% of those who were exploited in the sex industry were women, while 63% of identified traffickers were men. Most victims are placed in abusive and coercive situations, and escaping is difficult and life-threatening for them.
According to The International Labor Organization, 4.5 million people are affected by sex trafficking worldwide. Sex trafficking victims are often caught up in prosecuted criminal activities such as illegal prostitution. Beyond legal problems, long-lasting harms such as disease (AIDS), malnutrition, psychological trauma, drug addiction, and social ostracism against this group of people need to be addressed as well.
Sex bribery in the work place
Sex bribery is defined as a "form of quid pro quo harassment in a sexual relationship with the declared or implicit condition for acquiring/retaining employment or its benefit" in an employment setting. A common example of sex bribery comes down to sexual activity or sexually related behavior accompanied by a reward such as a promotion opportunity or a raise in pay. In a workplace, sexual activity may be coerced under the threat of two major types of punishment: negative work performance feedback/evaluations and withheld chances of promotions and raises.
According to the 2016 Personal Safety Survey, around 55% of women over 18 have experienced sexual harassment in their lifetime, including receiving indecent phone calls, texts and emails; indecent exposure; inappropriate comments about their body and sex life; unwanted touching, grabbing and kissing; and having their texts, pictures, and sexual videos exposed and distributed without permission. It is a serious, widespread problem, and a person who has experienced sexual harassment can feel stressed, anxious and depressed, sometimes withdrawing socially, becoming less productive, and losing confidence and self-esteem.
The modern phenomenon of higher female as opposed to male victimization from sexual harassment is sometimes used as a justification for essentialist views of sex differences in sexuality (such as by Donald Symons who argued in his 1979 book The Evolution of Human Sexuality that "only men rape", a claim which is now universally condemned), but most social scientists (especially those writing from a third- or fourth-wave feminist framework) dispute this, pointing to institutionalized patriarchy as a more reasonable culprit than "male nature".
Sex as a female resource: theory and critique
There are several controversial cases in today's society that are relevant to sexual economics theory.
Virginity Auction
A virginity auction is a controversial auction often published online. The person who tries to sell his/her virginity is often a young female, and the winning bidder will have the opportunity to be the first to have intercourse with the person. It is a controversial topic for multiple reasons, including questions over the ethics of leveraging virginity as a "prize" in a patriarchal society, as well as issues of authenticity. People who auction their virginity mostly do so for quick financial help. Sexual economics theorists attempt to use virginity auctions to prove that women use sex as a resource to obtain financial help from men. While this would seem to indicate that such processes do occur in western culture, once again the theory faces severe criticism from anthropologists and sociologists who point to non-western counterexamples where virginity is not valued, or where men are expected to render sexual services to women in exchange for favors (such as the Kabyle Amazigh of North Africa).
Sugar baby
Sugar baby is another controversial case that exists in modern American society. A sugar baby and a sugar daddy/mommy are in a mutually beneficial relationship: sugar babies provide time and sexual services to please their partners, and sugar daddies/mommies give financial support in return, including help with student loans, and also provide luxuries such as expensive items the sugar babies could not afford on their own. According to the numbers registered on dating websites, sugar babies are mostly female, and the number of sugar daddies is distinctly higher than the number of sugar mommies. Supporters of sexual economics theory point to this discrepancy as evidence for the central claim that men "need" or desire sexual gratification more than women do, while critics note that in patriarchal societies where most wealth and access to resources are held by men, such transactional relationships are likely to develop.
Criticism: anthropology, sociology, and philosophy
The sexual economics theory has been highly controversial from the start, due to its basis in a biologically deterministic worldview and its overreliance on Roy Baumeister's theoretical framework as well as highly questionable self-report survey data. Sexual economics presupposes a highly reductionist understanding of both sexuality and economics, and (like many theories in evolutionary psychology) a retroprojection of highly culturally-bound western phenomena as some sort of "universal human norm". The basic framework that later developed into the sexual economics theory originated with neo-Darwinian anthropologists such as Donald Symons, whose claims about human sexuality and especially female sexuality are extremely controversial and widely criticized.
While writers like Donald Symons and David Buss have attempted to universalize their findings, and even purported to find cross-cultural support for theories such as "sex is something that women have and men want", their research suffers from a multitude of both philosophical and empirical weaknesses. For instance, their samples neglect to include matrilineal or matrilocal societies, where sex differences in behavior are typically either minimized or reversed. Generally speaking, most samples which indicate the finding that men are more sexually motivated than women are from societies which are historically highly patriarchal, and hold (or held until recently) a highly negative attitude towards premarital sex, especially in women. Such societies tend to be large, populous states as opposed to smaller-scale foraging and horticultural societies, which are usually more liberal towards sexuality. These studies almost always rely on quantitative self-report surveys, which are plagued with problems such as social desirability bias (maximized for women in cultures which have traditionally valued virginity), sampling issues and (in non-western cultures specifically) resistance or hostility to reveal personal information to outsiders. Cultural anthropologists working with Boasian methodology with an emphasis on participant observation and non-judgmental, emic evaluation of other societies have long warned of the dangers of reliance on quantitative methodology such as surveys (or any research method utilizing a primarily etic perspective) for adequately capturing the internal beliefs and behaviors of other cultures.
There is a sharp contrast between the assumed "universality" of male and female differences in sexual motivation reported by Buss, Symons, Baumeister and others utilizing self-report survey data, and the findings of traditional cultural anthropologists collecting data through classic methods of ethnographic fieldwork. One major issue with the core claim of sexual economics theory is that it is philosophically and epistemically a western concept to its core, to the extent that most non-western societies have typically viewed male and female sexual drives as either the same or else characterized by higher female than male libido. Historical studies have outlined a specific time and philosophical background when the modern western conception of "horny" male and coy female became dominant, pointing out that prior to the 18th century these ideas were not even widely believed in the west. Since no neuroscientific breakthroughs occurred between 1700 and 1800, the change in opinion (and the source of the modern western belief that men have a higher sex drive than women) was ideological, not scientific in nature, related to the construction of a bourgeois morality which relegated women to a role as moral guardians who remained at home while men went out and "sowed their wild oats" (the so-called cult of domesticity). Today, westerners continue to follow this paradigm and assume its validity a priori, despite lack of neuroscientific documentation indicating that men are biologically more hard-wired toward seeking sexual pleasure and a host of competing theories for this apparent "sex difference".
The belief that men are more promiscuous on average than women is common in the general culture and has even been taken up by some sociologists, despite the obvious mathematical problem: given a relatively similar number of males and females in a given human population, the amount of heterosexual promiscuity must average out to be more or less the same for both sexes. Self-report data typically finds that men report significantly larger numbers of lifetime sexual partners on average than women, which is a mathematical impossibility given the near-equal distribution of the sexes in the population. Therefore, it is highly likely that biased and dishonest reporting is occurring, with more men inflating their number of sex partners and more women underreporting theirs. Studies using bogus pipelines in order to adjust for social desirability bias have found that women are just as likely as men to have sex with multiple partners and even to masturbate. Even if one were to steelman the sexual economics theorists by pointing out that sexual behavior may not reflect men's inborn higher sexual motivation, one would still be forced to deal with cross-cultural evidence from non-patriarchal and non-restrictive societies like the Mosuo or the Trobriand Islanders whose "sexual economies" are fundamentally different from the model presupposed by this theory.
References
Human sexuality | Sexual economics | Biology | 3,711 |
23,947,184 | https://en.wikipedia.org/wiki/Reflections%20of%20signals%20on%20conducting%20lines | A signal travelling along an electrical transmission line will be partly, or wholly, reflected back in the opposite direction when the travelling signal encounters a discontinuity in the characteristic impedance of the line, or if the far end of the line is not terminated in its characteristic impedance. This can happen, for instance, if two lengths of dissimilar transmission lines are joined.
This article is about signal reflections on electrically conducting lines. Such lines are loosely referred to as copper lines, and indeed, in telecommunications are generally made from copper, but other metals are used, notably aluminium in power lines. Although this article is limited to describing reflections on conducting lines, this is essentially the same phenomenon as optical reflections in fibre-optic lines and microwave reflections in waveguides.
Reflections cause several undesirable effects, including modifying frequency responses, causing overload power in transmitters and overvoltages on power lines. However, the reflection phenomenon can be useful in such devices as stubs and impedance transformers. The special cases of open circuit and short circuit lines are of particular relevance to stubs.
Reflections cause standing waves to be set up on the line. Conversely, standing waves are an indication that reflections are present. There is a relationship between the measures of reflection coefficient and standing wave ratio.
Specific cases
There are several approaches to understanding reflections, but the relationship of reflections to the conservation laws is particularly enlightening. A simple example is to consider a step voltage, $V u(t)$ (where $V$ is the height of the step and $u(t)$ is the unit step function with time $t$), applied to one end of a lossless line, and to examine what happens when the line is terminated in various ways. The step will be propagated down the line according to the telegrapher's equation at some velocity $v$, and the incident voltage, $v_i$, at some point $x$ on the line is given by

$$v_i(x,t) = V\,u\!\left(t - \frac{x}{v}\right)$$

The incident current, $i_i$, can be found by dividing by the characteristic impedance, $Z_0$:

$$i_i(x,t) = \frac{v_i}{Z_0} = \frac{V}{Z_0}\,u\!\left(t - \frac{x}{v}\right)$$
Open circuit line
The incident wave travelling down the line is not affected in any way by the open circuit at the end of the line. It cannot have any effect until the step actually reaches that point. The signal cannot have any foreknowledge of what is at the end of the line and is only affected by the local characteristics of the line. However, if the line is of length $l$ the step will arrive at the open circuit at time $t = l/v$, at which point the current in the line is zero (by the definition of an open circuit). Since charge continues to arrive at the end of the line through the incident current, but no current is leaving the line, then conservation of electric charge requires that there must be an equal and opposite current into the end of the line. Essentially, this is Kirchhoff's current law in operation. This equal and opposite current is the reflected current, $i_r$, and since

$$i_r = -i_i$$

there must also be a reflected voltage, $v_r$, to drive the reflected current down the line. This reflected voltage must exist by reason of conservation of energy. The source is supplying energy to the line at a rate of $v_i i_i$. None of this energy is dissipated in the line or its termination and it must go somewhere. The only available direction is back up the line. Since the reflected current is equal in magnitude to the incident current, it must also be so that

$$v_r = v_i$$

These two voltages will add to each other so that after the step has been reflected, twice the incident voltage appears across the output terminals of the line. As the reflection proceeds back up the line the reflected voltage continues to add to the incident voltage and the reflected current continues to subtract from the incident current. After a further interval of $l/v$ the reflected step arrives at the generator end and the condition of double voltage and zero current will pertain there also, as well as all along the length of the line. If the generator is matched to the line with an impedance of $Z_0$ the step transient will be absorbed in the generator internal impedance and there will be no further reflections.
This counter-intuitive doubling of voltage may become clearer if the circuit voltages are considered when the line is so short that it can be ignored for the purposes of analysis. The equivalent circuit of a generator matched to a load to which it is delivering a voltage $V$ can be represented as in figure 2. That is, the generator can be represented as an ideal voltage generator of twice the voltage it is to deliver, $2V$, and an internal impedance of $Z_0$.

However, if the generator is left open circuit, a voltage of $2V$ appears at the generator output terminals as in figure 3. The same situation pertains if a very short transmission line is inserted between the generator and the open circuit. If, however, a longer line with a characteristic impedance of $Z_0$ and noticeable end-to-end delay is inserted, the generator – being initially matched to the impedance of the line – will have $V$ at the output. But after an interval, a reflected transient will return from the end of the line with the "information" that the line is actually unterminated, and the voltage will become $2V$ as before.
Short circuit line
The reflection from a short-circuited line can be described in similar terms to that from an open-circuited line. Just as in the open circuit case where the current must be zero at the end of the line, in the short circuit case the voltage must be zero since there can be no volts across a short circuit. Again, all of the energy must be reflected back up the line and the reflected voltage must be equal and opposite to the incident voltage by Kirchhoff's voltage law:

$$v_r = -v_i$$

and

$$i_r = i_i$$

As the reflection travels back up the line, the two voltages subtract and cancel, while the currents will add (the reflection is a double negative – a negative current travelling in the reverse direction), the dual situation to the open circuit case.
Arbitrary impedance
For the general case of a line terminated in some arbitrary impedance $Z_T$, it is usual to describe the signal as a wave travelling down the line and analyse it in the frequency domain. The impedance is consequently represented as a frequency-dependent complex function.
For a line terminated in its own characteristic impedance there is no reflection. By definition, terminating in the characteristic impedance has the same effect as an infinitely long line. Any other impedance will result in a reflection. The magnitude of the reflection will be smaller than the magnitude of the incident wave if the terminating impedance is wholly or partly resistive, since some of the energy of the incident wave will be absorbed in the resistance. The voltage ($v_T$) across the terminating impedance ($Z_T$) may be calculated by replacing the output of the line with an equivalent generator (figure 4) and is given by

$$v_T = \frac{2 Z_T}{Z_T + Z_0}\, v_i$$

The reflection, $v_r$, must be the exact amount required to make $v_i + v_r = v_T$:

$$v_r = v_T - v_i = \frac{Z_T - Z_0}{Z_T + Z_0}\, v_i$$

The reflection coefficient, $\Gamma$, is defined as

$$\Gamma = \frac{v_r}{v_i}$$

and substituting in the expression for $v_r$,

$$\Gamma = \frac{Z_T - Z_0}{Z_T + Z_0}$$

In general $\Gamma$ is a complex function, but the above expression shows that the magnitude is limited to

$$|\Gamma| \le 1 \quad\text{when}\quad \operatorname{Re}(Z_T) \ge 0$$

The physical interpretation of this is that the reflection cannot be greater than the incident wave when only passive elements are involved (but see negative resistance amplifier for an example where this condition does not hold). For the special cases described above, $\Gamma = 1$ for an open circuit, $\Gamma = -1$ for a short circuit, and $\Gamma = 0$ for a line terminated in its characteristic impedance.
When both $Z_0$ and $Z_T$ are purely resistive then $\Gamma$ must be purely real. In the general case when $\Gamma$ is complex, this is to be interpreted as a shift in phase of the reflected wave relative to the incident wave.
Reactive termination
Another special case occurs when $Z_0$ is purely real ($R_0$) and $Z_T$ is purely imaginary ($iX_T$), that is, it is a reactance. In this case,

$$\Gamma = \frac{iX_T - R_0}{iX_T + R_0}$$

Since

$$|iX_T - R_0| = |iX_T + R_0|$$

then

$$|\Gamma| = 1$$

showing that all the incident wave is reflected, and none of it is absorbed in the termination, as is to be expected from a pure reactance. There is, however, a change of phase, $\phi$, in the reflection given by

$$\phi = \pi - 2\arctan\frac{X_T}{R_0}$$
Discontinuity along line
A discontinuity, or mismatch, somewhere along the length of the line results in part of the incident wave being reflected and part being transmitted onward in the second section of line as shown in figure 5. The reflection coefficient in this case is given by

$$\Gamma = \frac{Z_{02} - Z_{01}}{Z_{02} + Z_{01}}$$

where $Z_{01}$ and $Z_{02}$ are the characteristic impedances of the first and second sections of line respectively. In a similar manner, a transmission coefficient, $\tau$, can be defined to describe the portion of the wave, $v_t$, that is transmitted in the forward direction:

$$\tau = \frac{v_t}{v_i} = 1 + \Gamma = \frac{2 Z_{02}}{Z_{02} + Z_{01}}$$

Another kind of discontinuity is caused when both sections of line have an identical characteristic impedance but there is a lumped element, $Z_L$, at the discontinuity. For the example shown (figure 6) of a shunt lumped element,

$$\Gamma = \frac{\left(Z_L \parallel Z_0\right) - Z_0}{\left(Z_L \parallel Z_0\right) + Z_0} = \frac{-Z_0}{2 Z_L + Z_0}$$

Similar expressions can be developed for a series element, or any electrical network for that matter.
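As a numeric illustration of the junction formulas above, the sketch below evaluates the reflection and transmission coefficients; the 50 Ω and 75 Ω impedances (and the 500 Ω shunt element) are arbitrary example values, not figures from the text.

```python
# Illustrative check of the junction formulas (all impedance values arbitrary).
def reflection(z01: complex, z02: complex) -> complex:
    """Reflection coefficient at a junction between lines Z01 and Z02."""
    return (z02 - z01) / (z02 + z01)

def transmission(z01: complex, z02: complex) -> complex:
    """Transmission coefficient tau = 1 + Gamma."""
    return 1 + reflection(z01, z02)

z01, z02 = 50.0, 75.0            # e.g. joining 50-ohm cable to 75-ohm cable
print(reflection(z01, z02))      # 0.2
print(transmission(z01, z02))    # 1.2

# Shunt lumped element Z_L bridging a uniform Z0 line: Gamma = -Z0/(2*Z_L + Z0)
z0, z_l = 50.0, 500.0
print(-z0 / (2 * z_l + z0))      # -0.0476...
```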
Networks
Reflections in more complex scenarios, such as found on a network of cables, can result in very complicated and long-lasting waveforms on the cable. Even a simple overvoltage pulse entering a cable system as uncomplicated as the power wiring found in a typical private home can result in an oscillatory disturbance as the pulse is reflected to and from multiple circuit ends. These ring waves, as they are known, persist for far longer than the original pulse and their waveforms bear little obvious resemblance to the original disturbance, containing high frequency components in the tens of MHz range.
Standing waves
For a transmission line carrying sinusoidal waves, the phase of the reflected wave is continually changing with distance, with respect to the incident wave, as it proceeds back down the line. Because of this continuous change there are certain points on the line that the reflection will be in phase with the incident wave and the amplitude of the two waves will add. There will be other points where the two waves are in anti-phase and will consequently subtract. At these latter points the amplitude is at a minimum and they are known as nodes. If the incident wave has been totally reflected and the line is lossless, there will be complete cancellation at the nodes with zero signal present there despite the ongoing transmission of waves in both directions. The points where the waves are in phase are anti-nodes and represent a peak in amplitude. Nodes and anti-nodes alternate along the line and the combined wave amplitude varies continuously between them. The combined (incident plus reflected) wave appears to be standing still on the line and is called a standing wave.
The incident wave can be characterised in terms of the line's propagation constant $\gamma$, the source voltage $V_s$, and distance from the source $x$, by

$$v_i = V_s e^{-\gamma x}$$

However, it is often more convenient to work in terms of distance from the load ($d$) and the incident voltage that has arrived there ($V_i$).

$$v_i = V_i e^{\gamma d}$$

The negative sign is absent because $d$ is measured in the reverse direction back up the line and the voltage is increasing closer to the source. Likewise the reflected voltage is given by

$$v_r = \Gamma V_i e^{-\gamma d}$$

The total voltage on the line is given by

$$v = V_i\left(e^{\gamma d} + \Gamma e^{-\gamma d}\right)$$

It is often convenient to express this in terms of hyperbolic functions

$$v = V_i\left[(1+\Gamma)\cosh\gamma d + (1-\Gamma)\sinh\gamma d\right]$$

Similarly, the total current on the line is

$$i = \frac{V_i}{Z_0}\left[(1-\Gamma)\cosh\gamma d + (1+\Gamma)\sinh\gamma d\right]$$
The voltage nodes (current nodes are not at the same locations) and anti-nodes occur when

$$\frac{\partial |v|}{\partial d} = 0$$

Because of the absolute value bars, the general case analytical solution is tiresomely complicated, but in the case of lossless lines (or lines that are short enough that the losses can be neglected) $\gamma$ can be replaced by $i\beta$, where $\beta$ is the phase change constant. The voltage equation then reduces to trigonometric functions

$$v = V_i\left[(1+\Gamma)\cos\beta d + i(1-\Gamma)\sin\beta d\right]$$

and the partial differential of the magnitude of this yields the condition,

$$\tan 2\beta d = \frac{\operatorname{Im}\Gamma}{\operatorname{Re}\Gamma}$$

Expressing $\beta$ in terms of wavelength, $\beta = \frac{2\pi}{\lambda}$, allows $d$ to be solved in terms of $\lambda$:

$$d = \frac{\lambda}{4\pi}\left(\arctan\frac{\operatorname{Im}\Gamma}{\operatorname{Re}\Gamma} + n\pi\right), \qquad n = 0, 1, 2, \ldots$$

$\Gamma$ is purely real when the termination is short circuit or open circuit, or when both $Z_0$ and $Z_T$ are purely resistive. In those cases the nodes and anti-nodes are given by

$$\tan 2\beta d = 0$$

which solves for $d$ at

$$d = \frac{n\lambda}{4}$$

For $\Gamma$ negative the first point is a node, for $\Gamma$ positive the first point is an anti-node, and thereafter they will alternate. For terminations that are not purely resistive the spacing and alternation remain the same, but the whole pattern is shifted along the line by a constant amount related to the phase of $\Gamma$.
Voltage standing wave ratio
The ratio of $|v|$ at anti-nodes and nodes is called the voltage standing wave ratio (VSWR) and is related to the reflection coefficient by

$$\mathrm{VSWR} = \frac{1 + |\Gamma|}{1 - |\Gamma|}$$

for a lossless line; the expression for the current standing wave ratio (ISWR) is identical in this case. For a lossy line the expression is only valid adjacent to the termination; VSWR asymptotically approaches unity with distance from the termination or discontinuity.
VSWR and the positions of the nodes are parameters that can be directly measured with an instrument called a slotted line. This instrument makes use of the reflection phenomenon to make many different measurements at microwave frequencies. One use is that VSWR and node position can be used to calculate the impedance of a test component terminating the slotted line. This is a useful method because measuring impedances by directly measuring voltages and currents is difficult at these frequencies.
VSWR is the conventional means of expressing the match of a radio transmitter to its antenna. It is an important parameter because power reflected back into a high power transmitter can damage its output circuitry.
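As a quick numeric illustration of the VSWR relation above (the termination impedances below are arbitrary example values, not from the text):

```python
# VSWR from the reflection coefficient, for a lossless line.
def vswr(z_t: complex, z0: float) -> float:
    gamma = (z_t - z0) / (z_t + z0)    # reflection coefficient at termination
    return (1 + abs(gamma)) / (1 - abs(gamma))

print(vswr(75, 50))    # 1.5  (mild mismatch)
print(vswr(50, 50))    # 1.0  (matched: no standing wave)
print(vswr(1e12, 50))  # very large (approaching an open circuit)
```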
Input impedance
The input impedance looking into a transmission line which is not terminated with its characteristic impedance at the far end will be something other than $Z_0$ and will be a function of the length of the line. The value of this impedance can be found by dividing the expression for total voltage by the expression for total current given above:

$$Z_{in} = Z_0\,\frac{(1+\Gamma)\cosh\gamma l + (1-\Gamma)\sinh\gamma l}{(1-\Gamma)\cosh\gamma l + (1+\Gamma)\sinh\gamma l}$$

Substituting $\Gamma = \frac{Z_T - Z_0}{Z_T + Z_0}$, with $l$ the length of the line, and dividing through by $\cosh\gamma l$ reduces this to

$$Z_{in} = Z_0\,\frac{Z_T + Z_0\tanh\gamma l}{Z_0 + Z_T\tanh\gamma l}$$

As before, when considering just short pieces of transmission line, $\gamma$ can be replaced by $i\beta$ and the expression reduces to trigonometric functions

$$Z_{in} = Z_0\,\frac{Z_T + iZ_0\tan\beta l}{Z_0 + iZ_T\tan\beta l}$$
Applications
There are two structures that are of particular importance which use reflected waves to modify impedance. One is the stub, which is a short length of line terminated in a short circuit (or it can be an open circuit). This produces a purely imaginary impedance at its input, that is, a reactance:

$$Z_{in} = iZ_0\tan\beta l$$

By suitable choice of length, the stub can be used in place of a capacitor, an inductor or a resonant circuit.
The other structure is the quarter wave impedance transformer. As its name suggests, this is a line exactly $\frac{\lambda}{4}$ in length. Since $\tan\beta l \to \infty$ at this length, it will produce the inverse of its terminating impedance:

$$Z_{in} = \frac{Z_0^2}{Z_T}$$
Both of these structures are widely used in distributed element filters and impedance matching networks.
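A minimal numeric sketch of the quarter-wave relation above: matching a hypothetical 100 Ω load to a 50 Ω system is an assumed example, and the geometric-mean design rule follows directly from $Z_{in} = Z_0^2/Z_T$.

```python
import math

# Quarter-wave transformer: Z_in = Z_line**2 / Z_T. To match a load Z_T to a
# system impedance Z_sys, choose the transformer section's characteristic
# impedance as the geometric mean of the two.
z_sys, z_load = 50.0, 100.0
z_line = math.sqrt(z_sys * z_load)   # ~70.7 ohm quarter-wave section
z_in = z_line**2 / z_load            # impedance seen at the input
print(z_line, z_in)                  # 70.71..., 50.0 -> matched to the system
```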
See also
Attenuation distortion
Antenna tuner
Fresnel reflection
Lecher lines
Time-domain reflectometry
Space cloth
Smith Chart
References
Electronic design
Signal cables
Telecommunications engineering
Distributed element circuits
Transmission lines | Reflections of signals on conducting lines | Engineering | 3,045 |
512,460 | https://en.wikipedia.org/wiki/TasWireless | TasWireless is a group of wireless networking enthusiasts in Tasmania, Australia. Between them they have set up wireless community networks in both Hobart and Launceston. The group has gone through many names, tas.air, www.tas.air.net.au, TPAN (Tasmanian Public Airwave Network) and now TasWireless.
With users from several different backgrounds, including computer networking, amateur radio, amateur television, programming, Linux/BSD server administration, antenna and satellite dish installations, and lots more, they are willing to assist with any community networks in any part of the state.
Introduction
The TasWireless site was first started in 1999. It started as a splinter group from TasLUG, the Tasmanian Linux Users Group. There was only a small number of people interested in wireless networking at this time, fewer than five each in Hobart and Launceston. A node database for Tasmanian regions was started and the mailing list was put online, but due to the lack of practical experience and knowledge, very little happened. The cost of Wi-Fi cards and wireless access points was also a problem.
In early 2002, cheap SkyNet Global 802.11b PC cards flooded the market. These cards were liquidated stock and cost around A$50-60 each - the average retail price was still around A$200. A lot of these cards were shipped to the state and distributed (both by TasWireless admins and otherwise).
Wireless networks in Hobart
The predominant network in Hobart is called StarNet. This was started as a private network by a small group of amateur radio enthusiasts, around April 2002. It included around six or seven sites.
In April 2003, an operator of the TasWireless website stumbled upon one of their nodes, with SSID StarNet, and posted his find to the mailing list.
As a result, all users involved were able to share knowledge and make some minor changes to the network routing.
Another network RexNet, based in Kingston was also found; they had already been working with the StarNet group to eventually join the networks.
In mid-2003, various 802.11b wireless access points appeared on the market at low prices - Svec and Minitar brand access points were selling for around $100. This made setting up nodes easier, as the compatibility issues between various brands of PCI cradles, PC cards, and operating systems caused some problems.
By the start of March 2004, there were around 25 nodes on StarNet, reaching from Tea Tree, Otago, Rosetta, Lutana, Glenorchy, Moonah, Lindisfarne, Lenah Valley, Bellerive, Acton, Tranmere, Sandy Bay and Kingston.
Wireless networks in Launceston
Wireless networks in Launceston were a lot slower to take off than in Hobart, but that is to be expected with a smaller population. Several small peer-to-peer links were tested, but no major infrastructure was rolled out.
At the end of January 2004, two groups appeared, around the same time. One group was unnamed (though was working under the TasWireless name), the other was called TasGrid.
Both groups currently have two major access points each with around 10-12 nodes being on each. Neither group appears to be very active at the present time.
Launceston Wireless has started forming some connections, hoping to create a working citywide public network.
Wireless networks in the north-west
There are several people who have expressed interest in a wireless network on the north-west coast of Tasmania - however no networks currently exist.
External links
Launceston Wireless
TasWireless
StarNet
TasGrid
NodeDB maps - Tasmania
Wireless network organizations
Organisations based in Tasmania
Communications in Australia | TasWireless | Technology | 771 |
1,045,127 | https://en.wikipedia.org/wiki/Landrace | A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds.
A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use.
Plant landraces have been the subject of more academic research, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure.
Characteristics
There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the classification. Individual criteria may be weighted differently depending on a given source's focus (e.g., governmental regulation, biological sciences, agribusiness, anthropology and culture, environmental conservation, pet-keeping and -breeding, etc.). Additionally, not all cultivars agreed to be landraces exhibit every characteristic of a landrace. General features that characterize a landrace may include:
It is morphologically distinctive and identifiable (i.e., has particular and recognizable characteristics or properties), yet remains "dynamic".
It is genetically adapted to, and has a reputation for being able to withstand, the conditions of the local environment, including climate, disease and pests, even cultural practices.
It is not the product of formal (governmental, organizational, or private) breeding programs, and may lack systematic selection, development and improvement by breeders.
It is maintained and fostered less deliberately than a standardized breed, with its genetic isolation principally a matter of geography acting upon whatever animals that happened to be brought by humans to a given area.
It has a historical origin in a specific geographic area, will usually have its own local name(s), and will often be classified according to intended purpose.
Where yield (e.g. of a grain or fruit crop) can be measured, a landrace will show high stability of yield, even under adverse conditions, but a moderate yield, even under carefully managed conditions.
At the level of genetic testing, its heredity will show a degree of integrity, but still some genetic heterogeneity (i.e. genetic diversity).
Terminology
Landrace literally means 'country-breed' (German: Landrasse) and close cognates of it are found in various Germanic languages. The first known reference to the role of landraces as genetic resources was made in 1890 at an agriculture and forestry congress in Vienna, Austria. The term was first defined by Kurt von Rümker in 1908, and more clearly described in 1909 by U. J. Mansholt, who wrote that landraces have more stable characteristics and better resistance to adverse conditions, but have lower production capacity than cultivars, and are apt to change genetically when moved to another environment. H. Kiessling added in 1912 that a landrace is a mixture of phenotypic forms despite relative outward uniformity, and a great adaptability to its natural and human environment.
The word landrace entered non-academic English in the early 1930s, by way of the Danish Landrace pig, a particular breed of lop-eared swine. Many other languages do not use separate terms, like landrace and breed, but instead rely on extended description to convey such distinctions. Spanish is one such language.
Geneticist D. Phillip Sponenberg described animal breeds within these classes: the landrace, the standardized breed, modern "type" breeds, industrial strains, and feral populations. He describes landraces as an early stage of breed development, created by a combination of founder effect, isolation, and environmental pressures. Human selection for production goals is also typical of landraces.
As discussed in more detail in breed, that term itself has several definitions from various scientific and animal husbandry perspectives. Some of those senses of breed relate to the concept of landraces. A Food and Agriculture Organization of the United Nations (FAO) guideline defines landrace and landrace breed as "a breed that has largely developed through adaptation to the natural environment and traditional production system in which it has been raised." This is in contrast to its definition of a standardized breed: "a breed of livestock that was developed according to a strict programme of genetic isolation and formal artificial selection to achieve a particular phenotype."
In various domestic species (including pigs, goats, sheep and geese) some standardized breeds include "Landrace" in their names, but do not meet widely used definitions of landraces. For example, the British Landrace pig is a standardized breed, derived from earlier breeds with "Landrace" names.
Farmers' variety, usually applied to local cultivars, or seen as intermediate between a landrace and a cultivar, may also include landraces when referring to plant varieties not subjected to formal breeding programs.
Autochthonous and allochthonous landraces
A landrace native to, or produced for a long time within, the agricultural system in which it is found is referred to as an autochthonous landrace, while a more recently introduced one is termed an allochthonous landrace.
Within academic agronomy, the term autochthonous landrace is sometimes used with a more technical, productivity-related definition, synthesized by A. C. Zeven from previous definitions beginning with Mansholt's: "an autochthonous landrace is a variety with a high capacity to tolerate biotic and abiotic stress, resulting in a high yield stability and an intermediate yield level under a low input agricultural system."
The terms autochthonous and allochthonous are most often applied to plants, with animals more often being referred to as indigenous or native. Examples of references in sources to long-term local landraces of livestock include constructions such as "indigenous landraces of sheep", and "Leicester Longwool sheep were bred to the native landraces of the region". Some usage of autochthonous does occur in reference to livestock, e.g. "autochthonous races of cattle such as the Asturian mountain cattle – Ratina and Casina – and Tudanca cattle."
Biodiversity and conservation
A significant proportion of farmers around the world grow landrace crops. However, as industrialized agriculture spreads, cultivars, which are selectively bred for high yield, rapid growth, disease and drought resistance, and other commercial production values, are supplanting landraces, putting more and more of them at risk of extinction.
In 1927 at the International Agricultural Congress, organized by the predecessor of the FAO, an extensive discussion was held on the need to conserve landraces. A recommendation that members organize nation-by-nation landrace conservation did not succeed in leading to widespread conservation efforts.
Landraces are often free from many intellectual property and other regulatory encumbrances. However, in some jurisdictions, a focus on their production may result in missing out on some benefits afforded to producers of genetically selected and homogenous organisms, including breeders' rights legislation, easier availability of loans and other business services, even the right to share seed or stock with others, depending on how favorable the laws in the area are to high-yield agribusiness interests.
As Regine Andersen of the Fridtjof Nansen Institute (Norway) and the Farmers' Rights Project puts it, "Agricultural biodiversity is being eroded. This trend is putting at risk the ability of future generations to feed themselves. In order to reverse the trend, new policies must be implemented worldwide. The irony of the matter is that the poorest farmers are the stewards of genetic diversity." Protecting farmer interests and protecting biodiversity is at the heart of the International Treaty on Plant Genetic Resources for Food and Agriculture (the "Plant Treaty" for short), under the Food and Agriculture Organization of the United Nations (FAO), though its concerns are not exclusively limited to landraces.
Landraces played a basic role in the development of the standardized breeds but are today threatened by the market success of the standardized breeds. In developing countries, landraces still play an important role, especially in traditional production systems. Specimens within an animal landrace tend to be genetically similar, though more diverse than members of a standardized or formal breed.
In situ and ex situ landrace conservation
Two approaches have been used to conserve plant landraces:
in situ where the landrace is grown and conserved by farmers on farms.
ex situ where the landrace is conserved in an artificial environment such as a gene-bank, using controls such as laminated packets kept frozen at .
As the amount of agricultural land dedicated to growing landrace crops declines, such as in the example of wheat landraces in the Fertile Crescent, landraces can become extinct in cultivation. Therefore ex situ landrace conservation practices are considered a way to avoid losing the genetic diversity completely. Research published in 2020 suggested that existing ways of cataloging diversity within ex situ genebanks fall short of cataloging the appropriate information for landrace crops.
An in situ conservation effort to save the Berrettina di Lungavilla squash landrace made use of participatory plant breeding practices in order to incorporate the local community into the work.
Preserving cereal landraces
Preservation efforts for cereal strains are ongoing, both in situ and in online-searchable germplasm collections (seed banks), coordinated by Bioversity International and the National Institute of Agricultural Botany (NIAB, UK). However, more may need to be done, because plant genetic variety, the source of crop health and seed quality, depends on a diversity of landraces and other traditionally used varieties. Efforts were mostly focused on Iberia, the Balkans, and European Russia, and dominated by species from mountainous areas. Despite their incompleteness, these efforts have been described as "crucial in preventing the extinction of many of these local ecotypes".
An agricultural study published in 2008 showed that landrace cereal crops began to decline in Europe in the 19th century such that cereal landraces "have largely fallen out of use" in Europe. Landrace cultivation in central and northwest Europe was almost eradicated by the early 20th century, due to economic pressure to grow improved, modern cultivars. While many in the region are already extinct, some have survived by being passed from generation to generation, and have also been revived by enthusiasts outside Europe to preserve European agriculture and food culture elsewhere. These survivals are usually for specific uses, such as thatch, and traditional European cuisine and craft beer brewing.
Plants
Plant landrace development
The label landrace includes regional cultigens that are genetically heterogeneous, but with enough characteristics in common to permit their recognition as a group. These characteristics are used by farmers to manage diversity and purity within landraces.
In some cultures, the development of new landraces is typically limited to members of specific social groups, such as women or shamans. Maintaining existing landraces, like developing new ones, requires that farmers be able to identify crop-specific characteristics and that those characteristics are passed on to following generations.
Over time, the process of identifying the distinguishing characteristic or features of a new landrace is reinforced by cultivation processes; for example, descendants of a plant that is notably drought tolerant may become iteratively more so through selective breeding as farmers regard it as better for dry areas and prioritize planting it in those locations. This is one way in which farming systems can develop a portfolio of landraces over time that have specific ecological niches and uses.
Conversely, modern cultivars can also be developed into a landrace over time when farmers save seed and practice selective breeding.
Although landraces are often discussed once they have become endemic to a particular geographical region, landraces have always been moved over long and short distances. Some landraces can adapt to various environments, while others only thrive within specific conditions. Self-fertilizing and vegetatively propagated species adapt by changing the frequencies of phenotypes. Outbreeding crops absorb new genotypes through intentional and unintentional hybridization, or through mutation.
A clear example of a plant landrace is the diverse adaptation of wheat to differing artificial selection pressures.
Cultivars developed from landraces
Members of a landrace variety, selected for uniformity with regard to a unique feature over a period of time, can be developed into a farmers' variety or cultivar. Traits from landraces are valuable for incorporation into elite lines. Crop disease resistance genes from landraces can provide continually needed resistance to more widely used, modern varieties.
Examples of plant landraces
Beans
Carrots
Maize
Okra
Peas
Peppers
Rice
Squash
Tomatillo
Tomatoes
Wheat
Animals
Animal landrace development
Some standardized animal breeds originate from attempts to make landraces more consistent through selective breeding, and a landrace may become a more formal breed with the creation of a breed registry or publication of a breed standard. In such a case, one may think of the landrace as a "stage" in breed development. However, in other cases, formalizing a landrace may result in the genetic resource of a landrace being lost through crossbreeding.
While many landrace animals are associated with farming, other domestic animals have been put to use as modes of transportation, as companion animals, for sporting purposes, and for other non-farming uses, so their geographic distribution may differ. For example, horse landraces are less common because human use of them for transport has meant that they have moved with people more commonly and constantly than most other domestic animals, reducing the incidence of populations locally genetically isolated for extensive periods of time.
Examples of animal landraces
Cats
Many standardized breeds have rather recently (within a century or less) been derived from landraces. Examples, often called natural breeds, include Arabian Mau, Egyptian Mau, Korat, Kurilian Bobtail, Maine Coon, Manx, Norwegian Forest Cat, Siberian, and Siamese.
In some cases, such as the Turkish Angora and Turkish Van breeds and their possible derivation from the Van cat landrace, the relationships are not entirely clear.
Cattle
Dogs
Dog landraces and the selectively bred dog breeds that follow breed standards vary widely depending on their origins and purpose.
Landraces are distinguished from dog breeds which have breed standards, breed clubs and registries.
Landrace dogs have more variety in their appearance than do standardized dog breeds. An example of a dog landrace with a related standardized breed with a similar name is the collie. The Scotch Collie is a landrace, while the Rough Collie and the Border Collie are standardized breeds. They can be very different in appearance, though the Rough Collie in particular was developed from the Scotch Collie by inbreeding to fix certain highly desired traits. In contrast to the landrace, in the various standardized Collie breeds, purebred individuals closely match a breed-standard appearance but might have lost other useful characteristics and have developed undesirable traits linked to inbreeding.
The ancient landrace dogs of the Fertile Crescent that led to the Saluki breed excel in running down game across open tracts of hot desert, but conformation-bred individuals of the breed are not necessarily able to chase and catch desert hares.
Goats
Some standardized breeds that are derived from landraces include the Dutch Landrace, Swedish Landrace and Finnish Landrace goats. The Danish Landrace is a modern mix of three different breeds, one of which was a "Landrace"-named breed.
Sheep
Horses
The wild progenitor of the domestic horse is extinct. It is rare for landraces among domestic horses to remain isolated, because human use of horses for transportation causes them to move from one local population to another.
The heavy 'draft' type of domestic horse, developed in Europe, has differentiated into many separate landraces or breeds. Examples of horse landraces also include insular populations in Greece and Indonesia, and, on a broader scale, New World populations derived from the founder stock of Colonial Spanish horse.
The Yakutian and Mongolian Horses of Asia have "unimproved" characteristics.
Pigs
The standardized swine breeds named "Landrace" are often not actually landraces or derived from landraces. The Danish Landrace pig breed, pedigreed in 1896 from an actual local landrace, is the principal ancestor of the American Landrace (1930s). In this way, the Swedish Landrace is derived from the Danish and from other Scandinavian breeds, as is the British Landrace breed.
Chicken
Ducks
Geese
Many standardized goose breeds named "Landrace", e.g. the Twente Landrace goose, are not actually true landraces, but may be derived from them.
Rabbits
See also
References
External links
Short DIVERSEEDS video on crop wild relatives and landraces in the fertile crescent in Israel
Biology terminology
Breeds
Domesticated plants
Domesticated animals
Rare breed conservation | Landrace | Biology | 3,719 |
1,012,545 | https://en.wikipedia.org/wiki/Regge%20calculus | In general relativity, Regge calculus is a formalism for producing simplicial approximations of spacetimes that are solutions to the Einstein field equation. The calculus was introduced by the Italian theoretician Tullio Regge in 1961.
Overview
The starting point for Regge's work is the fact that every four-dimensional time-orientable Lorentzian manifold admits a triangulation into simplices. Furthermore, the spacetime curvature can be expressed in terms of deficit angles associated with 2-faces where arrangements of 4-simplices meet. These 2-faces play the same role as the vertices where arrangements of triangles meet in a triangulation of a 2-manifold, which is easier to visualize. Here a vertex with a positive angular deficit represents a concentration of positive Gaussian curvature, whereas a vertex with a negative angular deficit represents a concentration of negative Gaussian curvature.
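In the two-dimensional picture this deficit is simply the angle left over after adding up the triangle angles around a vertex; in four dimensions the same quantity is attached to each triangular hinge. In the standard notation of the literature (a sketch in Euclidean-signature form; the symbols below are conventional and not drawn from this article):

\[
\delta_h \;=\; 2\pi \;-\; \sum_{\sigma \supset h} \theta_\sigma(h),
\]

where the sum runs over the dihedral angles \(\theta_\sigma(h)\) of the 4-simplices \(\sigma\) sharing the hinge \(h\), and \(\delta_h\) is the deficit angle concentrated there.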
The deficit angles can be computed directly from the various edge lengths in the triangulation, which is equivalent to saying that the Riemann curvature tensor can be computed from the metric tensor of a Lorentzian manifold. Regge showed that the vacuum field equations can be reformulated as a restriction on these deficit angles. He then showed how this can be applied to evolve an initial spacelike hyperslice according to the vacuum field equation.
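Concretely, Regge's discrete counterpart of the vacuum Einstein–Hilbert action, and the equations obtained by varying it with respect to the edge lengths, take the standard form (again a Euclidean-signature sketch; \(A_h\) is the area of hinge \(h\), \(l_e\) an edge length, \(G\) Newton's constant):

\[
S_{\mathrm{Regge}} \;=\; \frac{1}{8\pi G} \sum_h A_h\,\delta_h,
\qquad
\frac{\partial S_{\mathrm{Regge}}}{\partial l_e} \;=\; \frac{1}{8\pi G} \sum_{h \supset e} \delta_h\, \frac{\partial A_h}{\partial l_e} \;=\; 0,
\]

where the Schläfli identity \(\sum_h A_h\, \partial \delta_h / \partial l_e = 0\) removes the terms that would come from varying the deficit angles themselves, leaving the simplicial vacuum field equations.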
The result is that, starting with a triangulation of some spacelike hyperslice (which must itself satisfy a certain constraint equation), one can eventually obtain a simplicial approximation to a vacuum solution. This can be applied to difficult problems in numerical relativity such as simulating the collision of two black holes.
The elegant idea behind Regge calculus has motivated the construction of further generalizations of this idea. In particular, Regge calculus has been adapted to study quantum gravity.
See also
Numerical relativity
Quantum gravity
Euclidean quantum gravity
Piecewise linear manifold
Euclidean simplex
Path integral formulation
Lattice gauge theory
Wheeler–DeWitt equation
Mathematics of general relativity
Causal dynamical triangulation
Ricci calculus
Twisted geometries
Notes
References
External links
Regge calculus on ScienceWorld
Mathematical methods in general relativity
Simplicial sets
Numerical analysis | Regge calculus | Mathematics | 491 |
42,238,225 | https://en.wikipedia.org/wiki/Biomicrofluidics | Biomicrofluidics is a bimonthly peer-reviewed scientific journal covering all aspects of research on fundamental physicochemical mechanisms associated with microfluidic, nanofluidic, and molecular/cellular biophysical phenomena in addition to novel microfluidic and nanofluidic techniques for diagnostic, medical, biological, pharmaceutical, environmental, and chemical applications. The editors-in-chief are Hsueh-Chia Chang (University of Notre Dame) and Leslie Y. Yeo (RMIT University).
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index, Current Contents/Physical Chemical and Earth Sciences, and BIOSIS Previews. According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.258.
References
External links
Fluid dynamics journals
American Institute of Physics academic journals
Bimonthly journals
Academic journals established in 2007
English-language journals | Biomicrofluidics | Chemistry | 187 |
73,323,441 | https://en.wikipedia.org/wiki/HD%20167096 | HD 167096, also known as HR 6818 or rarely 4 G. Coronae Australis, is a binary star located in the southern constellation Corona Australis. It has an apparent magnitude of 5.45, making it faintly visible to the naked eye. The system is located relatively close at a distance of 224 light years based on Gaia DR3 parallax measurements but is drifting closer with a poorly constrained heliocentric radial velocity of . At its current distance HD 167096's brightness is diminished by three tenths of a magnitudes due to interstellar dust and it has an absolute magnitude of +0.64.
The primary has a stellar classification of G8/K0 III, indicating that it is an evolved red giant with the characteristics of a G8 and K0 giant star. It has 1.11 times the mass of the Sun but has expanded to 8.92 times the Sun's radius. It radiates 32.4 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of , giving it an orangish-yellow hue. It has near-solar metallicity at [Fe/H] = −0.02 and spins too slowly for its projected rotational velocity to be measured accurately. This is a binary star that completes a circular orbit within 1.81 years. Since the two components have a separation of only , it is difficult to measure their individual properties.
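The quoted radius and luminosity are linked to the effective temperature by the Stefan–Boltzmann law; in solar units (a standard relation, with the temperature below derived for illustration from the quoted \(L\) and \(R\), not taken from the article):

\[
\frac{L}{L_\odot} \;=\; \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{4}
\quad\Longrightarrow\quad
T_{\mathrm{eff}} \;\approx\; \left(\frac{32.4}{8.92^{2}}\right)^{1/4} T_{\mathrm{eff},\odot} \;\approx\; 0.80\, T_{\mathrm{eff},\odot} \;\approx\; 4600\ \mathrm{K},
\]

consistent with a late-G/early-K giant.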
References
G-type giants
K-type giants
Binary stars
Corona Australis
Coronae Australis, 4
CD-44 12456
167096
089507
6818 | HD 167096 | Astronomy | 335 |
39,652,486 | https://en.wikipedia.org/wiki/Cheon%20Jinwoo | Cheon Jinwoo is the H.G. Underwood Professor at Yonsei University and the Founding Director of the Center for Nanomedicine, Institute for Basic Science (IBS). As a leading chemist in inorganic materials chemistry and nanomedicine Cheon and his research group mainly focus on developing chemical principles for synthesizing complex inorganic materials and nanoprobes/actuators used in imaging and controlling of cellular functions within the deep tissue in living systems.
Throughout his career, he has received numerous prestigious awards, including Inchon Prize (2010), ChungAm Prize (2012), Ho-Am Prize (2015), Clarivate Analytics Highly Cited Researcher in the field of chemistry (2014, 2015, 2016) and cross-field (2018). He is a fellow of the American Chemical Society, Royal Society of Chemistry, and Korean Academy of Science and Technology. In addition to his research, he serves as a senior editor of Accounts of Chemical Research and an editorial advisory board member of several leading journals, including Journal of Materials Chemistry, Nano Letters, Materials Horizons, Chemical & Engineering News and Journal of the American Chemical Society.
Education
Cheon began his academic journey at Yonsei University in 1981, where he majored in chemistry. He earned both his Bachelor of Science and Master of Science in 1985 and 1987 at Yonsei University, respectively. In 1993, under the guidance of Professor Gregory S. Girolami, he received his Ph.D. in chemistry from the University of Illinois at Urbana-Champaign.
Career
Following his doctoral studies, he continued his research as a postdoctoral fellow at the University of California, Berkeley. For the next three years, he was a staff research associate at UC Los Angeles (UCLA) before returning to South Korea to work as an assistant and then associate professor at KAIST. His research at KAIST focused on geometrical shape control of nanoparticles and magnetic particles. This also marked his first publication on nanocrystals, which became a recurring interest in his research career and a source of multiple highly cited articles.
He started working at Yonsei University as a full professor in 2002 and later became the Horace G. Underwood Professor in 2008. From 2010 to 2016, Cheon was the director of the National Creative Research Initiative Center for Evolutionary Nanoparticles. In 2015, he became the founding director of IBS Center for Nanomedicine at the Yonsei University (IBS CNM at Yonsei) in Seoul and has been serving as a director since then.
His research at Yonsei on nanoscale phenomena has led to nanomaterial applications in biology, including highly sensitive MRI contrast agents and nanoscale toolkits for cells. In 2004, he demonstrated the principle of size-dependent MRI contrast effects using nanoparticles, which enabled the development of magnetism-engineered iron oxide (MEIO) as an ultra-sensitive nanoparticle MRI contrast agent that might help detect early-stage cancer. Cheon has also developed magnetic nanomachines with various mechanical components at the hundreds-of-nanometers scale, allowing remote and precise control of nanostructures using magnetic fields. These nanomachines are promising for targeted drug delivery and minimally invasive surgery, with the mechanical components enhancing their functionality. Since 2021, his work has notably advanced the field of magnetogenetics for wireless control of deep-tissue in vivo systems, especially brain activity. Magnetogenetics uses the mechanical torque or force of magnetic nanoparticles to manipulate neuronal activity via mechanosensitive ion channels (e.g. Piezo1), offering a non-invasive method of controlling brain function over meter-scale distances. This innovative approach allows for the precise regulation of neuronal circuits using magnetic fields, providing potential new treatments for neurological disorders and insights into brain function.
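The physics underlying this control is the textbook response of a magnetic dipole to an applied field (standard relations, not formulas taken from the cited work): a nanoparticle with magnetic moment \(\mathbf{m}\) in a field \(\mathbf{B}\) experiences

\[
\boldsymbol{\tau} \;=\; \mathbf{m} \times \mathbf{B},
\qquad
\mathbf{F} \;=\; \nabla\!\left(\mathbf{m} \cdot \mathbf{B}\right),
\]

so a uniform field twists the particle while a field gradient pulls on it; either mechanical stress can be transmitted to a tethered mechanosensitive channel such as Piezo1.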
Awards and honors
2024: Glenn T. Seaborg Memorial Lecture in Inorganic Chemistry
2024: Humboldt Research Award
2018: Madhuri and Jagdish N. Sheth International Alumni Award for Exceptional Achievement (University of Illinois)
2015: Ho-Am Prize in Science (HOAM Foundation)
2014: The World's Most Influential Scientific Minds (Thomson Reuters)
2013: KCS Academic Achievement Award (Korean Chemical Society)
2012: POSCO TJ Park Prize (POSCO TJ Park Foundation)
2012: Korea's 100 Most Influential Person for Next 10 Years (DongA Daily News)
2010: Inchon Award (Inchon Memorial Foundation)
2007: Song-Gok Science Prize (Korea Institute of Science and Technology)
2004: KCS Award in Inorganic Chemistry (Korean Chemical Society)
2002: Young Scientist Award, Korean Academy of Science and Technology
2001: Wiley Young Chemist Award (Korean Chemical Society-Wiley & Sons)
Selected recent publications
Lin, M.; Lee, J.-u.; Kim, Y.; Kim, G.; Jung, Y.; Jo, A.; Park, M.; Lee, S.; Lah, J.D.; Park, J.; Noh, K.; Lee, J.-H.; Kwak, M.; Lungerich, D.; Cheon, J. "A magnetically powered nanomachine with a DNA clutch" Nat. Nanotechnol., 2024, 19, 646.
Kim, W.-S.; Min, S.; Kim, S.K.; Kang, S.; An, S.; Criado-Hidalgo, E.; Davis, H.; Bar-Zion, A.; Malounda, D.; Kim, Y.H.; Lee, J.-H.; Bae, S.H.; Lee, J.G.; Kwak, M.; Cho, S.-W.; Shapiro, M.G.; Cheon, J. “Magneto-acoustic protein nanostructures for non-invasive imaging of tissue mechanics in vivo” Nat. Mater., 2024, 23, 290.
Lee, J. U.; Shin, W.; Lim, Y.; Kim, J.; Kim, W. R.; Kim, H.; Lee, J. H.; Cheon, J. "Non-contact long-range magnetic stimulation of mechanosensitive ion channels in freely moving animals" Nat. Mater., 2021, 20, 1029.
Shin, T.-H.; Kim, P. K.; Kang, S.; Cheong, J.; Kim, S.; Lim, Y.; Shin, W.; Jung, J.-Y.; Lah, J. D.; Choi, B. W.; Cheon, J. "High-resolution T1 MRI via renally clearable dextran nanoparticles with an iron oxide shell" Nat. Biomed. Eng., 2021, 5, 252.
Cheong, J.; Yu, H.; Lee, C. Y.; Lee, J.-u.; Choi, H.-J.; Lee, J.-H.; Lee, H.; Cheon, J. "Fast detection of SARS-CoV-2 RNA via the integration of plasmonic thermocycling and fluorescence detection in a portable device" Nat. Biomed. Eng., 2020, 4, 1159.
Lim, Y.; Lee, C.-H.; Jun, C.-H.; Kim, K.; Cheon, J. “Morphology-conserving non-kirkendall anion exchange of metal oxide nanocrystals” J. Am. Chem. Soc. 2020, 142, 9130
Choi, J.; Kim, S.; Cheon, J. et al. "Distance-dependent magnetic resonance tuning as a versatile MRI sensing platform for biological targets" Nat. Mater. 2017, 16, 537.
Seo, D.; Southard, K.M.; Cheon, J.; Jun, Y-w. et al. "A Mechanogenetic toolkit for interrogating cell signaling in space and time" Cell, 2016, 165, 1507.
References
External links
Google Scholar Jinwoo Cheon
IBS Center for NanoMedicine
1962 births
Living people
Institute for Basic Science
Recipients of the Ho-Am Prize in Science
South Korean organic chemists
POSCO TJ Park Prize
Humboldt Research Award recipients | Cheon Jinwoo | Chemistry | 1,787 |
1,368,490 | https://en.wikipedia.org/wiki/Rolls-Royce%20Avon | The Rolls-Royce Avon was the first axial flow jet engine designed and produced by Rolls-Royce. Introduced in 1950, the engine went on to become one of their most successful post-World War II engine designs. It was used in a wide variety of aircraft, both military and civilian, as well as versions for stationary and maritime power.
An English Electric Canberra powered by two Avons made the first un-refuelled non-stop transatlantic flight by a jet, and a BOAC de Havilland Comet 4 powered by four Avons made the first scheduled transatlantic crossing by a jet airliner.
Production of the Avon aero engine version ended after 24 years in 1974. Production of the Avon-derived industrial version continues to this day, by Siemens since 2015.
The current version of the Avon, the Avon 200, is an industrial gas generator that is rated at . As of 2011, 1,200 Industrial Avons have been sold, and the type has established a 60,000,000 hour record for its class.
Design and development
The engine was initially a private venture put forward for the English Electric Canberra. Originally known as the AJ.65 (for Axial Jet, 6,500 lbf), the engine was based on an initial project concept by Alan Arnold Griffith, which combined an axial compressor with a combustion system and single-stage turbine using principles proven in the Rolls-Royce Nene engine.
Design work began in 1945. The Avon design team was initially headed by Stanley Hooker assisted by Geoff Wilde. Development of the engine was moved from Barnoldswick to Derby in 1948 and Hooker subsequently left the company, moving to Bristol Engines.
The first engine ran on 25 March 1947, with a 12-stage compressor. The engine was difficult to start, would not accelerate and broke first-stage blades. Two-position inlet guide vanes and compressor bleed were among the design changes which allowed the engine, as the RA.2, to run a 25-hour test and fly in the two outboard positions on a converted Avro Lancastrian (military serial VM732) from Hucknall on 15 August 1948.
The first production engine, which needed a two-stage turbine, was the RA.3, or Avon Mk 101. Several modified versions of this design were produced in the Mk. 100 series.
The Avon 200 series was a complete redesign having very little in common with earlier Marks. Differences included a completely new combustion section and a 15-stage compressor based on that of the Armstrong-Siddeley Sapphire. The first application was the Vickers Valiant.
Operational history
The engine entered production in 1950 as the RA.3/Mk.101 with thrust in the English Electric Canberra B.2. Similar versions were used in the Canberra B.6, Hawker Hunter and Supermarine Swift. Uprated versions followed, the RA.7/Mk.114 with thrust in the de Havilland Comet C.2, the RA.14/Mk.201, in the Vickers Valiant and the RA.26, used in the Comet C.3 and Hawker Hunter F.6. An Avon-powered de Havilland Comet 4 flew the first scheduled transatlantic jet service in 1958. The highest thrust version was the RA.29 Mk.301/2 (RB.146) used in later versions of the English Electric Lightning. It produced with afterburning. Other aircraft to use the Avon included the de Havilland Sea Vixen, Supermarine Scimitar and Fairey Delta 2.
The RA.3/Mk.109 was produced under licence by Svenska Flygmotor as the RM5, and an uprated RA.29 as the RM6 with thrust. The RM5 powered the Saab 32 Lansen and the RM6 powered the Saab 35 Draken and all-weather fighter version of the Lansen (J 32B).
300 Avon 113s and a larger number of Avon 203s were produced under licence in Belgium by Fabrique Nationale Division Moteurs.
In the US the RA.28-49 was used in the VTOL Ryan X-13 Vertijet aircraft.
In Australia, the Avon was used by Commonwealth Aircraft Corporation in the CA-27 Avon-Sabre.
The Avon continued in production for the Sud Aviation Caravelle and English Electric (BAC) Lightning until 1974, by which time over 11,000 had been built. It remained in operational service with the RAF until 23 June 2006 in the English Electric Canberra PR.9.
Initial design work was done on the 2-spool RB.106/RB.128 as an Avon successor for large supersonic fighters.
Variants and designations
AJ65: The original designation, standing for Axial Jet 6,500 lbf thrust
RA.1: Prototype engines for testing and development.
RA.2: Pre-production engines for testing –
RA.3: Civil designation for the first Avon production mark. First Avon with a two-stage turbine. –
RA.7: Civil designation for the uprated version of the Avon RA.3. Electrically started. –
RA.7R: RA.7 with reheat. Meant for use with an afterburner. Explosive-cartridge started. – without afterburner, with afterburner.
RA.14: Civil designation for the uprated version of the Avon with can-annular combustion chamber and Sapphire style compressor –
RA.14R: RA.14 with reheat. – without afterburner, with afterburner.
RA.19
RA.19R: RA.19 with reheat. – with afterburner.
RA.21: Production engine developed from the RA.7 –
RA.21R: Production engine developed from the RA.7R. Same as the Avon Mk.21. – without afterburner, with afterburner.
RA.23R: RA.23 with reheat. – without afterburner, with afterburner.
RA.24
RA.24R: Same as the Avon Mk.47A.
RA.25: Civil Mk.503
RA.26: Further improvements to the Avon 200 series – Civil Mk.521
RA.28: Second-generation variant –
RA.29: Civil designation for the Mk.300 series (used by the Sud Aviation Caravelle)
RA.29/1
RA.29/3
RA.29/6: Same as the Avon Mk.533 –
RB.146: Rolls-Royce designation for Avon Series 300
Avon Series 100
Avon Series 100 are early military versions of the Avon.
Avon Mk.100: Military designation for the RA.3 Avon –
Avon Mk.101C
Avon Mk.113
Avon Mk.114: Military designation for the RA.7 Avon –
Avon Mk.115: Same as the Avon Mk.23 –
Avon Mk.117
Avon Mk.118
Avon Mk.20: Australian version built under licence by CAC for the CAC Sabre Mk.31 –
Avon Mk.21: Afterburning Swedish version built by RR and under licence by SFA for the Saab 32A/C. Same as the RA.21R. Designated RM5A1. – without afterburner, with different afterburners.
Avon Mk.21A: Improved Mk.21 with increased diameter on the engine outlet for more power. Built by RR and under licence by SFA for the Saab 32A/C. Designated RM5A2. – without afterburner, with different afterburners.
Avon Mk.23: Same as the Avon Mk.115. Non-afterburning Swedish version built by RR for the Hawker Hunter Mk.50. Designated RM5B1. –
Avon Mk.24: Non-afterburning Swedish version built by RR for the Hawker Hunter Mk.50. Designated RM5B2.
Avon Mk.25: Non-afterburning Swedish version built by RR for the Hawker Hunter Mk.50. Designated RM5B3.
Avon Mk.26: Australian version built by CAC for the CAC Sabre Mk.32 –
Avon Series 200
Avon Series 200 are uprated military versions of the Avon with can-annular combustion chamber and Sapphire style compressor.
Avon Mk.200 –
Avon Mk.47A: Afterburning Swedish version built by RR and under licence by SFA for the Saab 32B. Same as the RA.24R. Designated RM6A. – without afterburner, with afterburner.
Avon Mk.48A: Afterburning Swedish version built by RR and under licence by SFA for the Saab 35A/B/C. Designated RM6B. – without afterburner, with afterburner.
Avon Series 300
Avon Series 300 are further developed military after-burning versions of the Avon for the English Electric Lightning.
Avon Mk.300 –
Avon Mk.301: The ultimate military Avon for the English Electric Lightning – dry, wet.
Avon Mk.302: Essentially similar to the Mk.301
Avon Mk.60: Afterburning Swedish version built by RR and under licence by SFA for the Saab 35 Draken D/F. Same as the RA.29R. Designated RM6C. – without afterburner, with afterburner.
Westinghouse XJ54: Avon 300-series scaled down by Westinghouse to 105 lb/sec airflow to produce 6,200 lb thrust.
Avon Series 500
Avon Series 500 are civilian equivalents to the military Avon Series 200 variants.
Avon Mk.504
Avon Mk.506
Avon Mk.521
Avon Mk.522
Avon Mk.524
Avon Mk.524B
Avon Mk.525
Avon Mk.525B
Avon Mk.527
Avon Mk.527B
Avon Mk.530
Avon Mk.531
Avon Mk.531B
Avon Mk.532R
Avon Mk.532R-B
Avon Mk.533: Same as the RA.29/6 –
Avon Mk.533R
Avon Mk.533R-11A
Swedish designations
Reaktionsmotor 3A – RM3A: Swedish designation for the Avon Mk.101C
Reaktionsmotor 5A1 – RM5A1: Swedish designation for the Avon Mk.21
Reaktionsmotor 5A2 – RM5A2: Swedish designation for the Avon Mk.21A
Reaktionsmotor 5B1 – RM5B1: Swedish designation for the Avon Mk.23
Reaktionsmotor 5B2 – RM5B2: Swedish designation for the Avon Mk.24
Reaktionsmotor 5B3 – RM5B3: Swedish designation for the Avon Mk.25
Reaktionsmotor 6A – RM6A: Swedish designation for the Avon Mk.47A
Reaktionsmotor 6B – RM6B: Swedish designation for the Avon Mk.48A
Reaktionsmotor 6C – RM6C: Swedish designation for the Avon Mk.60
Applications
Military aviation
CAC CA-23 (cancelled)
CAC Sabre
de Havilland Sea Vixen
English Electric Canberra
English Electric Lightning
Fairey Delta 2
Hawker Hunter
Ryan X-13 Vertijet
Saab 32 Lansen
Saab 35 Draken
Supermarine Swift
Supermarine Scimitar
Vickers Valiant
Civil aviation
de Havilland Comet
Lockheed L-193 (cancelled)
Sud Aviation Caravelle
Other uses
The Avon is also currently marketed as a compact, high-reliability, stationary power source. As the AVON 1533, it has a maximum continuous output of 21,480 shp (16.02 MW) at 7,900 rpm and a thermal efficiency of 30%. An example can be found at Didcot Power Station in the United Kingdom, where four Avon generators are used to provide Black start services to assist in a restart of the National Grid in the event of a system-wide failure, or to provide additional generating capacity in periods of very high demand.
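As a quick arithmetic check on the quoted figures (using the standard conversion 1 shp ≈ 745.7 W; the fuel heat-input estimate is inferred from the stated 30% efficiency rather than quoted from the source):

\[
21{,}480\ \mathrm{shp} \times 745.7\ \mathrm{W/shp} \;\approx\; 16.02\ \mathrm{MW},
\qquad
P_{\mathrm{fuel}} \;\approx\; \frac{16.02\ \mathrm{MW}}{0.30} \;\approx\; 53\ \mathrm{MW}.
\]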
As a compact electrical generator, the type EAS1 Avon based generator can generate a continuous output of 14.9 MW.
On 4 October 1983, Richard Noble's Thrust2 vehicle, powered by a single Rolls-Royce Avon 302 jet engine, set a new land-speed record of at the Black Rock Desert in Nevada.
Surviving engines
Several Avon-powered Hawker Hunter aircraft remain airworthy in private ownership in 2010.
Thunder City in South Africa as of 2011 operated two Avon-powered English Electric Lightnings.
SWAHF keeps three Saab Lansens and two Saab Drakens airworthy for air shows.
Engines on display
A Rolls-Royce Avon Mk 1 is on display at Amrita University, Coimbatore, Tamil Nadu in the Department of Aerospace Engineering's Lab.
A Mk 524 Avon has been restored at the Museo Nacional de Aeronáutica de Argentina by the Museum Friend's Association in Moron, Argentina and is now on display.
An Avon Mk.203 was donated by Rolls-Royce to the National Museum of the United States Air Force in July 1986 for public display.
A Rolls-Royce Avon is on public display at the Midland Air Museum.
A preserved Rolls-Royce Avon Mk.203 is on display at the Royal Air Force Museum London.
A partially sectioned Mk.101 Avon is on display at the Royal Air Force Museum Cosford.
A Rolls-Royce Avon is on display at the Australian National Aviation Museum, Moorabbin, Victoria, Australia
A Rolls-Royce Avon is on public display at East Midlands Aeropark
A Rolls-Royce Avon is on display at the Fleet Air Arm Museum at RNAS Yeovilton.
Several RR Avon engines are on display at the Queensland Air Museum, Caloundra, Australia
A Rolls-Royce Avon engine is on public display at the Historical Aircraft Restoration Society museum at Illawarra Regional Airport, New South Wales, Australia.
A Rolls-Royce Avon engine is on public display at the Parkes Aviation Museum in Parkes, New South Wales, Australia.
A Rolls-Royce Avon is on display at the Classic Flyers Aircraft Museum, Mt Maunganui, Bay of Plenty, New Zealand.
A Rolls-Royce Avon Mk.26 is on display at Mikes Dyno Tuning and Performance Engines, Dandenong, Victoria, Australia
A Rolls-Royce Avon (GAF) is on display at the South Australian Aviation Museum, Port Adelaide, South Australia.
A Rolls-Royce Avon is on public display in the car park (under cover) at South Lanarkshire College, East Kilbride as an exhibit about Nae Pasaran.
A Rolls-Royce Avon MK 101 is on display at the entrance foyer of Faculty of Engineering, University of Peradeniya which was gifted by Professor Selvadurai Mahalingam
A partially sectioned Avon is on public display at the City of Norwich Aviation Museum in Horsham St Faith, Norfolk.
A Rolls-Royce Avon engine is on public display in The Charlesworth Transport Gallery, at Kelham Island Museum, Sheffield.
Specifications (Avon 301R)
See also
References
Notes
Bibliography
Gunston, Bill. World Encyclopaedia of Aero Engines. Cambridge, England. Patrick Stephens Limited, 1989.
External links
The fascinating story of the Rolls Royce Avon turbojet engine, the first Rolls Royce axial flow turbojet
National Museum of USAF – Avon MK 203 Turbojet
"Rolls-Royce Avon" a 1955 Flight article on the Avon
"Rolls-Royce Avon 200 Series" a 1957 Flight article
Avon
1940s turbojet engines
Aero-derivative engines
Products introduced in 1950
Axial-compressor gas turbine engines | Rolls-Royce Avon | Technology | 3,135 |
2,392,458 | https://en.wikipedia.org/wiki/Micro-inequity | A Micro-inequity is a small, often overlooked act of exclusion or bias that could convey a lack of respect, recognition, or fairness towards marginalized individuals. These acts can manifest in various ways, such as consistently interrupting or dismissing the contributions of a particular group during meetings or discussions. The theory of micro-inequity helps elucidate how individuals may experience being overlooked, ignored, or harmed based on characteristics like race, gender, or other perceived attributes of disadvantage, including political views and marital status. This falls within the broader marginalizing micro-level dynamics that refer to subtle, often unnoticed mechanisms within a society that contribute to the exclusion, disempowerment, or disadvantage of certain individuals or groups. These dynamics operate at a granular level, perpetuating inequalities and disparities in resource distribution, access to opportunities, and overall participation in social, economic, and political spheres. Micro-inequities, micro-affirmations, and micro-advantages are often executed using coded language or subtle non-verbal cues, formally in written communications or informally in conversations, known as micro-messaging. The term originated in 1973.
Overview
Maryville University defines micro-inequities as subtle messages that devalue, discourage, and impair workplace performance. These messages are conveyed through facial expressions, gestures, tone of voice, word choices, nuance, and syntax that are relayed both consciously and unconsciously. Repeated sending or receiving of micro-inequities can erode personal and professional relationships. The Star-Ledger article, "Micro-messages Matter" by Steve Adubato says that, "only the most astute and aware communicators recognize how [micro-messages] are received and perceived."
These messages can reveal more about the true nature of a relationship than words alone. The messages function as the core of how unconscious bias is communicated and how workplace exclusion is experienced. In the Profiles in Diversity Journal article "The DNA of Culture Change," Joyce Tucker writes, "Organizations have done a great job at controlling the big, easily-seen offensive behaviors but have been somewhat blind to what is rarely observed. Organizations have done great work at controlling the few elephants while being overrun by a phalanx of ants. Listening with your arms folded, losing eye contact with the person you're speaking with, or even the way you move your lips to shape a smile—in any given conversation, we may send hundreds of messages, often without even saying a word. Just as television or radio waves surround us, yet we never see them, these micro-messages are just as pervasive and nearly as difficult to discern."
Early studies
Mary Rowe of MIT coined the terms micro-inequities and micro-affirmations in 1973, building upon previous research on micro-aggression by Chester Pierce, specifically around racial hostility. Originally, Rowe referred to micro-inequities as the "Saturn's Ring Phenomenon" because the planet's rings protect and insulate it from the harshness of the world outside, much like the workplace culture created by micro-affirmations. Some of these papers were published in whole or in part in 1974, and a relatively complete version came out in 1990. Rowe published a longer article, "Micro-affirmations and Micro-inequities," in the Journal of the International Ombudsman Association, which includes more of her hypotheses about the importance of micro-affirmations. Earlier works in the same genre include that of Jean-Paul Sartre, who wrote about small acts of anti-Semitism, and Chester Pierce, who wrote about micro-aggressions as acts of racism and "childish" acts against children.
Mary Rowe's original research studied the impact micro-messages have on the academic community and relationships in general in the United States and worldwide. The first broad introduction of micro-inequities in the corporate workplace was initiated in 2002 by Insight Education Systems. It established the link between micro-messaging and corporate diversity and inclusion initiatives.
Definition
In the original articles on this subject in the 1970s (see references below), Mary Rowe defined micro-inequities as "apparently small events which are often ephemeral and hard-to-prove, events which are covert, often unintentional, frequently unrecognized by the perpetrator, which occur wherever people are perceived to be different." She wrote about homophobia, reactions to perceived disabilities, reactions to physical appearance, reverse discrimination against white and Black males in traditionally female environments, and various religious slights. She collected instances of micro-inequities anywhere at work or in communities—anywhere in the world—that people are perceived to be "different."
These differences indeed reach beyond unchangeable characteristics such as race or gender. In his book, "Micro messaging: Why Great Leadership is Beyond Words" (2006 McGraw-Hill), Stephen Young describes the impact micro-inequities have on an individual's workplace performance through additional factors, such as one's political views, marital status, tenure, style, resistance to comply with status quo and other characteristics that are changeable.
Young states that these drivers of unconscious bias reflect the positions people hold about others that are influenced by past experiences, forming filters that cause conclusions to be reached about a group or ethnicity through methods other than active thought or reasoning. The critical limitation of unconscious bias is that it is a concept, a state of mind, and therefore not consciously or intentionally displayed. The only way unconscious biases are manifested is through the subtle messages individuals send—typically, micro-inequities affect the performance of others.
Micro-affirmations and micro-advantages
A micro-affirmation, in Rowe's writing, is the reverse phenomenon. Micro-affirmations are subtle or small acknowledgments of a person's value and accomplishments. They may take the shape of public recognition of the person, "opening a door," referring positively to a person's work, commending someone on the spot, or making a happy introduction. "Small" affirmations form the basis of successful mentoring, effective networks, successful colleague-ships, and most caring relationships. They may lead to greater self-esteem and improved performance. In 2015, Rowe collected her hypotheses about the potential power of micro-affirmations:
"Blocking unconscious bias: We could try to practice—all the time—affirming the achievements of others. If we always look for excellence in the work of others and are universally respectful, may we be able to block our own unconscious bias?
Ameliorate damage: Can micro-affirmations (for example, in affinity groups and mentoring programs) make up for some of the damage caused by unconscious bias?
Meeting a core emotional concern: Since research suggests that appreciation and affirmation are core concerns for all of us, may this plan help in making the workplace more productive?
Evoking reciprocal affirmation: Since research suggests an impulse toward "reciprocity," may affirming behavior spread as we respond to support from others?
A possible role modeling effect: Research suggests that people are sensitive to the morale and happiness of those around them and especially sensitive to the behavior of a local manager. If managers, bystanders, and others are role models for affirming behavior, will some others follow suit? Peers and bystanders are often the most important actors because they are most likely to be present where people act in a biased fashion.
Rectifying our own unconscious bias: Research suggests that behavior follows attitudes. Attitudes also can be changed by behavior. If we consciously improve our behavior, may we lessen our unconscious bias?"
In 2021, Mary Rowe wrote of the influence of micro-affirmations in building a sense of "belonging."
There is a difference between "inequality" and "inequity." Inequality implies there is some comparison being made. For example, if a boss doesn't listen attentively to an employee, that in and of itself is not a micro-inequality. However, if the boss listens attentively to all of an employee's coworkers but not that employee, that might be a micro-inequality.
Inequity, by contrast, is simply something that may be perceived as unfair or unjust under the circumstances. Thus, a micro-inequity may occur with only one person present if that person is treated unfairly or unjustly. Similarly, a micro-affirmation may refer to "only one" person and does not imply any sense of advantage over others but rather provides support, inspiration, and encouragement to the affirmed individual.
An alternate perspective to Mary Rowe's "reverse phenomenon" of micro-affirmations theory is Stephen Young's introduction of a third layer, micro-advantages. Micro-advantages are subtle, often unconscious, messages that motivate, inspire, and enhance workplace performance. Like micro-inequities, they are conveyed through facial expressions, gestures, tone of voice, choice of words, nuance and syntax. Applied effectively, micro-advantages can unlock employee potential, enabling engagement, creativity, loyalty, and performance. Micro-advantages are central to effective leadership. An affirmation is a statement asserting existence or truth in a way that helps the person affirmed; a micro-advantage is a subtle message that motivates and inspires performance in the workplace or classroom.
In culture
Micro-inequities may concern race, religion, color, disability, sexual identity, social class, and national origin. Some are embodied in language that links certain derogatory stereotypes with a particular race. Examples of such micro-inequities would be the terms "an Indian giver" and "to gyp," or the phrase "to Jew down." Other examples include the casual use of the term "she" while referring to individuals in occupations that have been predominantly women, such as teachers, nurses, and secretaries, and the disrespect sometimes exhibited toward fathers as full-time homemakers.
Elimination of micro-inequities is a current focus of some universities, businesses, and government agencies as a key diversity strategy. According to some experts, micro-inequities can slowly and methodically erode a person's motivation and sense of worth. This may result in absenteeism, poor employee retention, and loss of productivity. In the article "Sizing Up What's Being Said" in The Sacramento Bee, nine techniques are outlined that help minimize the negative effect of micro-inequities.
Modern media is also responsible for the perpetuation of micro-inequities. People of color have been portrayed negatively; eminent people of color are poorly represented in Western media. Examples would include the fallacious belief that African Americans are the majority of those on welfare in the US. Many Native Americans are sensitive to the idea that "Columbus discovered" their land. Feagin and Benokraitis note that the mass media has portrayed women negatively in many respects; for example, women are portrayed as sexual objects in many music videos.
In Julie Rowe's Time Magazine article "Why Your Boss May Be Sweating the Small Stuff," she outlines workplace micro-inequity applications and how they influence performance. Rowe states, "It used to be that [micro-inequities were] tone-deaf moments used to buttress discrimination claims. Now they are becoming the basis for [validating] those claims."
Micro-messaging has varying impacts in academic and corporate settings. In academia, students predominantly receive knowledge from educators. Conversely, the corporate environment emphasizes collaboration, with leaders drawing on the expertise of their team members. Raising the knowledge of micro-messaging in the corporate sector can "make even hardened executives recognize themselves, or at the very least, their superiors" as senders of micro-inequities, according to Young. Since micro-inequities represent each person's status quo of behavior, it normally requires experiential examples on the receiving side to understand their impact on altering performance. Stephen Young and Mary Rowe agree, "A good way to deal with micro-inequities is to bring them to the forefront through discussion."
Further research and controversy
Mary Rowe defined micro-inequities as "small events that may be ephemeral and hard to prove" and stated that "it is not easy to measure the effects of gender micro-inequities because effects of unfair behavior may differ by context." There is a growing body of scholarly research on unconscious bias. Much of the modern approach has used an Implicit Association Test rather than Questionnaires or interviews. However, many scholars have published articles and analyses doubting the efficacy and validity of this research.
A book on the same subject was written pseudonymously in the late 1970s by Mary Howell, MD, of Harvard Medical School. Under the name of "Margaret Campbell, MD," Howell wrote, "Why Would a 'Girl' Want to go into Medicine?"
Wesley E. Profit wrote his Harvard doctoral thesis on the micro-inequities of racism. Ellen Spertus, an MIT student at the time, did a small study, "Why Are There So Few Female Computer Scientists?", MIT Artificial Intelligence Laboratory Technical Report 1315, August 1991. This is one of many such studies from various departments at MIT.
Frances K. Conley, then of Stanford Medical School, published "Walking Out on the Boys" in 1998, which deals with her experience as a woman neurosurgeon and sexism in the medical profession. Stephen Young uses the concept of "micro-advantages," rather than "micro-affirmations." He published "Micro-Messaging" in 2006 (McGraw-Hill). Scholarly works include "Why So Slow? The Advancement of Women" by Virginia Valian, MIT Press, 1999, and the article "What Knowers Know Well: Women, Work, and the Academy," Alison Wylie, University of Washington, 2009.
Recently, a great deal of work has been done by various consultants, experts doing research in the social sciences and neuroscience, and leaders in the field of diversity. After earning a communications degree from Emerson College, Stephen Young entered finance and eventually became senior vice president of JP Morgan Chase, managing the firm's global diversity strategy. Inspired by MIT Professor Mary P. Rowe's decades of research into what she called "micro-inequities" in colleges and the workplace, he became a consultant and developed seminars to sensitize executives to the full range of what he calls "micro-messages." Young's company, Insight Education Systems, founded in 2002, has helped implement his program at Starbucks, Raytheon, Cisco, IBM, Merck, and other Fortune 500 corporations.
References
Rowe, Mary, "The Minutiae of Discrimination: The Need for Support," in Forisha, Barbara and Barbara Goldman, Outsiders on the Inside, Women in Organizations, Prentice-Hall, Inc., New Jersey, 1981, Ch. 11, pp. 155–171.
Discrimination
Anthropology | Micro-inequity | Biology | 3,131 |
17,505,689 | https://en.wikipedia.org/wiki/PV%20Crystalox%20Solar | PV Crystalox Solar plc is a supplier to solar cell manufacturers, producing multicrystalline silicon wafers for use in solar electricity generation systems. It has operations in Germany, United Kingdom and Japan and its headquarters are in the United Kingdom. It is listed on the London Stock Exchange and was a former constituent of the FTSE 250 Index.
History
Crystalox was established as a private limited company in 1982 in the town of Wantage in Oxfordshire specialising in the design and manufacture of equipment for purification and crystal growth of metals, alloys, semiconductors and electro-optic materials. In 1990 the company started development of industrial production systems for directional solidification of multicrystalline silicon for the solar cell industry.
In 1994, there was a management buy-out by six senior managers. In 1997, PV Silicon AG, a specialist silicon wafer producer, was incorporated in Erfurt, Germany. A strategic partnership was formed between Crystalox and PV Silicon in 1999, which progressed to the merging of the two companies in 2002 and the incorporation of PV Crystalox AG. In 2006, the decision was taken to build a silicon production plant in Germany.
In June 2007, the company made its debut on the London Stock Exchange to raise additional funds for in-house silicon production and to further expand its international business. Between September 2007 and March 2010 the group was a constituent of the FTSE 250 index.
Technology and products
The company's industrial focus comprises the production of the polysilicon raw material at its factory in Bitterfeld in Germany, melting and crystallization of the silicon to form ingots at various sites around Abingdon in the UK, and converting the ingots into thin multicrystalline silicon wafers internally at the company's facility in Erfurt in Germany and externally at subcontractors in Japan.
These wafers are then sold to companies in the photovoltaics industry, where they are transformed into solar cells using semiconductor processing technology, linked together in strings and laminated between glass and plastic sheets to form durable modules.
References
External links
Official site
Companies listed on the London Stock Exchange
Companies established in 1982
Companies based in Oxfordshire
Engineering companies of the United Kingdom
Silicon wafer producers
Solar energy companies of the United Kingdom
Photovoltaics manufacturers
1982 establishments in England
British brands | PV Crystalox Solar | Engineering | 459 |
30,936 | https://en.wikipedia.org/wiki/TiVo | TiVo is a digital video recorder (DVR) developed and marketed by Xperi (previously by TiVo Corporation and TiVo Inc.) and introduced in 1999. TiVo provides an on-screen guide of scheduled broadcast television programs, whose features include "OnePass" schedules, which record every new episode of a series, and "WishList" searches, which allow the user to find and record shows that match their interests by title, actor, director, category, or keyword. TiVo also provides a range of features when the TiVo DVR is connected to a home network, including film and TV show downloads, advanced search, online scheduling, and at one time, personal photo viewing and local music playback.
Since its launch in its home market of the United States, TiVo has also been made available in Australia, Canada, Mexico, New Zealand, Puerto Rico, Sweden, Taiwan, Spain, and the United Kingdom. Newer models, however, have adopted the CableCARD standard, which is only deployed in the United States, and which limits the availability of certain features.
History and development
TiVo was developed by Jim Barton and Mike Ramsay through a corporation they named "Teleworld" which was later renamed to TiVo Inc. Though they originally intended to create a home network device, it was redesigned as a device that records digitized video onto a hard disk.
After exhibiting at the Consumer Electronics Show in January 1999, Mike Ramsay announced to the company that the first version of the TiVo digital video recorder would be released "in Q1" (the last day of which is March 31) despite an estimated 4 to 5 months of work remaining to complete the device. Because March 31, 1999, was a blue moon, the engineering staff code-named this first version of the TiVo DVR "Blue Moon".
The original TiVo DVR digitized and compressed analog video from any source (antenna, cable or direct broadcast satellite). TiVo also integrated its DVR service into the set-top boxes of satellite and cable providers. In late 2000, Philips Electronics introduced the DSR6000, the first DirecTV receiver with an integrated TiVo DVR. This new device, nicknamed the "DirecTiVo", stored digital signals sent from DirecTV directly onto a hard disk.
In early 2000, TiVo partnered with electronics manufacturer Thomson Multimedia (now Technicolor SA) and broadcaster British Sky Broadcasting to deliver the TiVo service in the UK market. This partnership resulted in the Thomson PVR10UK, a stand-alone receiver released in October 2000 that was based on the original reference design used in the United States by both Philips and Sony. TiVo ended UK unit sales in January 2003, though it continued to sell subscriptions and supply guide data to existing subscribed units until June 2011. TiVo branded products returned to the UK during 2010 under an exclusive partnership with cable TV provider Virgin Media.
TiVo was launched in Australia in July 2008 by Hybrid Television Services, a company owned by Australia's Seven Media Group and New Zealand's TVNZ. In 2009, TiVo Australia launched a model with a 320 GB hard drive, added a Blockbuster on-demand service, and in December launched a novel service called Caspa on Demand. TiVo also went on sale in New Zealand on 6 November 2009.
Janet Jackson's Super Bowl halftime show incident on February 1, 2004, set a record for being the most watched, recorded and replayed moment in TiVo history. The baring of one of Jackson's breasts at the end of her duet with Justin Timberlake, which caused a flood of outraged phone calls to CBS, was replayed a record number of times by TiVo users. A company representative stated, "The audience measurement guys have never seen anything like it. The audience reaction charts looked like an electrocardiogram."
In April 2016, Rovi acquired TiVo for $1.1 billion.
In December 2019, it was announced that TiVo would merge with Xperi Corporation. The merger completed in May 2020.
In early February 2024, TiVo removed the antenna version of the TiVo Edge from their website, apparently discontinuing their OTA line of DVRs. The cable version of the TiVo Edge as well as the TiVo Mini LUX and TiVo Stream 4K continue to be available.
TiVo digital video recorder
A TiVo DVR serves a function similar to that of a videocassette recorder (VCR), in that both allow a TV viewer to record programming for viewing at a later time, known as time shifting. Unlike a videocassette recorder, which uses removable magnetic tape cartridges, a TiVo DVR stores TV programs on an internal hard drive, much like a computer.
A TiVo DVR also automatically records programs that the user is likely to be interested in. TiVo DVRs also implement a patented feature that TiVo calls "trick play", allowing the viewer to pause live television and rewind and replay up to 30 minutes of recently viewed TV. TiVo DVRs can be connected to a computer local area network, allowing the TiVo device to download information, access video streaming services such as Netflix or Hulu, as well as music from the Internet.
Functions
TiVo DVRs communicate with TiVo's servers on a regular basis to receive program information updates, including descriptions, regular and guest actors, directors, genres, whether programs are new or repeats, and whether the broadcast is in high definition (HD). Program guide information is updated daily from Rovi (Tribune Media Services was used prior to September 2016).
Users can select individual programs to record or a "OnePass" (formerly "Season Pass") to record all episodes of a show. There are options to record First Run Only, First Run and Repeats, or All Episodes. An episode is considered "First Run" if aired within two weeks of that episode's initial air date. OnePasses can also "bookmark" shows from internet streaming video services and show a combined view of recordings and bookmarks.
When users' requests for multiple programs conflict, the lower-priority program in the OnePass Manager is either not recorded or clipped where times overlap. The lower-priority program will still be recorded if it airs again later. TiVo DVRs with multiple tuners simultaneously record the top-priority programs.
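The conflict handling just described is essentially priority scheduling of overlapping intervals onto a fixed number of tuners. The following Python sketch is illustrative only; the data layout and function names are hypothetical and not TiVo's actual implementation:

def fits(recorded, start, end, num_tuners):
    # Tuner usage can only increase at a recording's start time, so it is
    # enough to test at `start` and at each overlapping recording's start.
    points = [start] + [s for _, s, e in recorded if start < s < end]
    for t in points:
        busy = sum(1 for _, s, e in recorded if s <= t < e)
        if busy >= num_tuners:
            return False
    return True

def allocate_tuners(requests, num_tuners):
    """requests: (priority, start, end) tuples; lower number = higher priority."""
    recorded = []
    for prio, start, end in sorted(requests):   # highest priority first
        if fits(recorded, start, end, num_tuners):
            recorded.append((prio, start, end))
        # else: the lower-priority program is skipped (or would be clipped)
    return recorded

# Two tuners, three overlapping shows: the lowest-priority show is dropped.
shows = [(1, 20.0, 21.0), (2, 20.0, 21.0), (3, 20.5, 21.5)]
print(allocate_tuners(shows, num_tuners=2))     # priorities 1 and 2 win

A real scheduler would also retry skipped programs against later airings, as described above.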
TiVo pioneered recording programs based on household viewing habits; this is called TiVo Suggestions. Users can rate programs from three "thumbs up" to three "thumbs down". TiVo user ratings are combined to create a recommendation, based on what TiVo users with similar viewing habits watch. For example, if one user likes American Idol, America's Got Talent and Dancing with the Stars, then another TiVo user who watched just American Idol might get a recommendation for the other two shows. As of 2023, TiVo Suggestions is no longer supported, and the Thumbs Up/Down buttons can no longer be used to rate programs.
The amount of storage capacity for programs depends on the size of the hard drive inside the TiVo; different models have different-sized hard drives. When the hard drive is full, the oldest programs are deleted to make space for newer ones; programs that users flag not to be deleted are kept, and TiVo Suggestions are always the lowest priority. The recording capacity of a TiVo HD DVR can be expanded with an external hard drive, which can add additional hours of high-definition and standard-definition recording capacity.
When not recording specific user requests, the current channel is recorded for up to 30 minutes. Dual-tuner models record two channels. This allows users to rewind or pause anything that has been shown in the last thirty minutes, which is useful when viewing is interrupted. Shows already in progress can be entirely recorded if less than 30 minutes have been shown. Unlike VCRs, TiVo can record and play at the same time: a program can be watched from the beginning even while it is still being recorded. Some users take advantage of this by waiting 10 to 15 minutes after a program starts (or is replayed from a recording), so that they can fast-forward through commercials; in this way, by the end of the recording viewers are caught up with live television.
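The rolling live-TV buffer described above behaves like a fixed-size circular buffer: once full, the newest video continuously overwrites the oldest. A minimal Python sketch of the idea, hypothetical rather than TiVo's actual implementation (collections.deque supplies the bounded buffer):

from collections import deque

BUFFER_MINUTES = 30

# One entry per minute of video; a deque with maxlen discards the oldest
# entry automatically whenever a new one arrives and the buffer is full.
live_buffer = deque(maxlen=BUFFER_MINUTES)

def on_new_minute(video_chunk):
    live_buffer.append(video_chunk)   # oldest minute is silently dropped

def rewind(minutes):
    """Return the last `minutes` minutes of buffered video, oldest first."""
    minutes = min(minutes, len(live_buffer))
    return list(live_buffer)[-minutes:]

# Simulate 45 minutes of live TV; only the last 30 remain available.
for m in range(45):
    on_new_minute(f"minute-{m}")
print(rewind(5))    # ['minute-40', ..., 'minute-44']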
Unlike most DVRs, TiVo DVRs are easily connected to home networks, allowing users to schedule recordings on TiVo's website (via TiVo Central Online) and transfer recordings between TiVo units (Multi-Room Viewing (MRV)). Discontinued features included the ability to transfer recordings to and from a home computer (TiVoToGo (TTG) transfers), play music and view photos over the network, and access third-party applications written for TiVo's Home Media Engine (HME) API.
TiVo added a number of broadband features, most of which are no longer offered. These include:
Integration with Amazon Video on Demand, Jaman.com and Netflix Watch Instantly, offering users access to thousands of movie titles and television shows. Additionally, broadband-connected TiVo boxes can access digital photos from Picasa Web Albums or Photobucket. Another popular feature was access to Rhapsody music through TiVo, allowing users to listen to virtually any song from their living room. TiVo also teamed up with One True Media to give subscribers a private channel for sharing photos and video with family and friends. Subscribers could also access weather, traffic, Fandango movie listings (including ticket purchases), and music through Live365. In the summer of 2008, TiVo announced the availability of YouTube videos on TiVo.
On 7 June 2006, TiVo began offering TiVoCast, a broadband download service that initially offered content from sources such as Rocketboom and The New York Times; there are now over 70 TiVoCast channels available for TiVo subscribers.
TiVo has also expanded into media convergence. In January 2005, TiVoToGo, a feature allowing transfer of recorded shows from TiVo boxes to PCs, was added. TiVo partnered with Sonic in the release of MyDVD 6.1, software for editing and converting TiVoToGo files. In January 2007, TiVoToGo was extended to the Macintosh with Toast Titanium 8, Roxio software for assembling and burning digital media on CD and DVD media. In August 2005, TiVo rolled out "TiVo Desktop", allowing MPEG-2 video files to be moved from PCs to the TiVo for playback by the DVR. As of June 5, 2013, TiVo stopped distributing the free version of TiVo Desktop for PC in favor of selling TiVo Desktop Plus. Users who previously downloaded the free version of TiVo Desktop can continue to use the software without paying a fee for the Plus edition.
Parental features
TiVo KidZone (later removed in the Premiere and Roamio devices) was designed to give parents greater control over what their children see on television. This feature allows parents to choose which shows their children can watch and record. It also helps kids discover new shows through recommendations from leading national children's organizations. TiVo KidZone provides a customized Now Playing List for children that displays only pre-approved shows, keeping television as safe as possible.
Subscription service
The information that a TiVo DVR downloads regarding television schedules, as well as software updates and any other relevant information is available through a monthly service subscription in the United States. A different model applies in Australia, where the TiVo media device is bought for a one-off fee, without further subscription costs.
Lifetime subscription
There are multiple types of Product Lifetime Service. For satellite-enabled TiVo DVRs, the lifetime subscription remains as long as the account is active; the subscription does not follow a specific piece of hardware. This satellite lifetime subscription cannot be transferred to another person. Toshiba and Pioneer TiVo units equipped with DVD recording include a "Basic Lifetime Subscription", which is very similar to Full Lifetime, except that only three days of the program guide are viewable, and search and Internet capabilities are not available, or at least are limited. All units (except satellite, but including DVD units) can have a "Product Lifetime Subscription" added to the TiVo service, which covers the life of the TiVo DVR, not the life of the subscriber. The Product Lifetime Subscription accompanies the TiVo DVR in case of ownership transfer. TiVo makes no warranties or representations as to the expected lifetime of the TiVo DVR (aside from the manufacturer's Limited Warranty). In the past, TiVo has offered multiple "Trade Up" programs under which the Product Lifetime Subscription could be transferred from an old unit to a newer model for a fee. A TiVo can be used without a service agreement, but it will act more like a VCR, in that only manual recordings can be performed; without a connection to the TiVo service (for local time, program guide data, software updates, etc.), TiVo will shut down the recording function.
Service availability
The TiVo service is available in the United States, United Kingdom, Canada, Mexico, Spain and Taiwan at present. Over the years since its initial release in the United States, TiVo Series1 and Series2 DVRs have also been modified by end users to work in Australia, Brazil, Canada, New Zealand, the Netherlands, and South Africa.
TiVo went on sale in New Zealand in the first week of November 2009.
The TiVo Service came to an end in Australia on 31 October 2017. The electronic programming guide and TiVo recording features are no longer available, thus making all TiVo machines in Australia virtually useless.
United Kingdom
The TiVo service was launched in the United Kingdom in the autumn of 2000 but sold only 35,000 units over the next 18 months. Thomson, maker of the only UK TiVo box, abandoned it in early 2002 after BSkyB launched its Sky+ integrated set-top decoder and DVR, which dominated the market for DVRs in homes subscribing to BSkyB's paid-for satellite television service. Many manufacturers, including Thomson, have since launched integrated decoder/DVR boxes in the UK for other digital platforms, including free satellite, terrestrial, cable and IPTV.
A technical issue caused TiVo Suggestions to stop recording for S1 UK TiVo customers in late September 2008, but this was fixed in late January 2009.
Since December 2010, UK TiVo units that were not already on an active monthly or lifetime subscription could no longer be re-activated: BSkyB, which operated support for TiVo, no longer had full access to the TiVo systems to activate accounts.
The TiVo S1 subscription service was maintained for both lifetime and monthly subscriptions until 1 June 2011. A community project known as AltEPG was established in March 2011 with the aim of providing a replacement for the discontinued subscription service. This project now provides programme guide data and software upgrades for S1 TiVos.
On 24 November 2009, cable television provider Virgin Media entered into a strategic partnership with TiVo. Under the mutually exclusive agreement, TiVo developed a converged television and broadband interactive interface to power Virgin Media's next generation, high definition set top boxes. TiVo will become the exclusive provider of middleware and user interface software for Virgin Media's next generation set top boxes. Virgin Media will be the exclusive distributor of TiVo services and technology in the United Kingdom. Virgin Media released its first TiVo co-branded product in December 2010. On 17 March 2011, Virgin Media enabled access to a third tuner.
As of 12 February 2015, Virgin Media had 2 million TiVo customers, 50% of its TV customers.
Hardware anatomy
The TiVo DVR was designed by TiVo Inc., which currently provides the hardware design and Linux-based TiVo software, and operates a subscription service (without which most models of TiVo will not operate). TiVo units have been manufactured by various OEMs, including Philips, Sony, Cisco, Hughes, Pioneer, Toshiba, and Humax, which license the software from TiVo Inc. To date, there have been six "series" of TiVo units produced.
TiVo DVRs are based on PowerPC (Series1) or MIPS (Series2) processors connected to MPEG-2 encoder/decoder chips and high-capacity IDE/ATA hard drives. Series1 TiVo units used one or two drives of 13–60 GB; Series2 units have drives of 40–250 GB in size. TiVo has also partnered with Western Digital to create an external hard drive, the My DVR Expander, for TiVo HD and Series3 Boxes. It plugs into the TiVo box using an eSATA interface. It expands the High-Definition boxes by up to 67 hours of HD, and around 300 hours of standard programming. Other TiVo users have found many ways to expand TiVo storage, although these methods are not supported by TiVo, and may void the warranty.
Some recent models manufactured by Toshiba, Pioneer, and Humax, under license from TiVo, contain DVD-R/RW drives. The models can transfer recordings from the built-in hard drive to DVD Video compliant disc, playable in most modern DVD systems.
All standalone TiVo DVRs have coax/RF-in and an internal cable-ready tuner, as well as analog video input — composite/RCA and S-Video, for use with an external cable box or satellite receiver. The TiVo unit can use a serial cable or infrared blasters to control the external receiver. They have coax/RF, composite/RCA, and S-Video output, and the DVD systems also have component out. Audio is RCA stereo, and the DVD systems also have digital optical out.
Until 2006, standalone TiVo systems could only record one channel at a time, though a dual-tuner Series2DT (S2DT) box was introduced in April 2006. The S2DT has two internal cable-ready tuners and it supports a single external cable box or satellite receiver. The S2DT is therefore capable of recording two analog cable channels, one analog and one digital cable channel, or one analog cable and one satellite channel at a time, with the correct programming sources. Note, however, that the S2DT, unlike earlier units, cannot record from an antenna. This is due to an FCC mandate that all devices sold after March 2007 with an NTSC tuner must also contain an ATSC tuner. TiVo therefore had to choose between adding ATSC support, or removing NTSC support. With the S2DT they opted to remove NTSC; the Series3 supports NTSC and ATSC, along with digital cable channels (with CableCards).
The Series2 DVRs also have USB ports, currently used only to support network (wired Ethernet and WiFi) adapters. The early Series2 units, models starting with 110/130/140, have USB1.1 hardware, while all other systems have USB2.0. There have been four major generations of Series2 units. The TiVo-branded 1xx and 2xx generations were solid grey-black. The main difference was the upgrade from USB 1.1 to the much faster USB 2.0. The 5xx generation was a new design. The chassis is silver with a white oval in the faceplate. The white oval is backlit, leading to these units being called "Nightlight" boxes. The 5xx generation was designed to reduce costs, and this also caused a noticeable drop in performance in the system menus as well as a large performance drop in network transfers. The 5xx generation also introduced changes in the boot PROM that make them unmodifiable without extensive wiring changes. The 6xx generation resembles the previous 5xx model, except that it has a black oval. The 6xx is a new design and the only model available today is the S2DT with dual tuners and a built-in 10/100baseT Ethernet port as well. The 6xx is the best performing Series2 to date, outperforming even the old leader, the 2xx, and far better than the lowest performing 5xx.
Some TiVo systems are integrated with DirecTV receivers. These "DirecTiVo" recorders record the incoming satellite MPEG-2 digital stream directly to hard disk without conversion. Because of this and the fact that they have two tuners, DirecTiVos are able to record two programs at once. In addition, the lack of digital conversion allows recorded video to be of the same quality as live video. DirecTiVos have no MPEG encoder chip, and can only record DirecTV streams. However, DirecTV has disabled the networking capabilities on their systems, meaning DirecTiVo does not offer such features as multi-room viewing or TiVoToGo. Only the standalone systems can be networked without additional unsupported hacking.
DirecTiVo units (HR10-250) can record HDTV to a 250 GB hard drive, both from the DirecTV stream and over-the-air via a standard UHF- or VHF-capable antenna. They have two virtual tuners (each consisting of a DirecTV tuner paired with an ATSC over-the-air tuner) and, like the original DirecTiVo, can record two programs at once; further, the program guide is integrated between over-the-air and DirecTV so that all programs can be recorded and viewed in the same manner.
In 2005, DirecTV stopped marketing recorders powered by TiVo and focused on its own DVR line developed by its business units. DirecTV continues to support the existing base of DirecTV recorders powered by TiVo.
On 8 July 2006, DirecTV announced an upgrade to version 6.3 on all remaining HR10-250 DirecTiVo receivers, the first major upgrade since this unit was released. This upgrade includes features like program grouping (folders), a much faster on-screen guide, and new sorting features.
In September 2008, DirecTV and TiVo announced that they had extended their agreement, covering the development, marketing and distribution of a new HD DirecTV DVR featuring the TiVo service, as well as the extension of mutual intellectual property arrangements.
Since the discontinued Hughes Electronics DirecTV DVR with TiVo (model HR10-250), all subsequent TiVo units have been fully HDTV capable; earlier TiVo models record only analog standard-definition television (NTSC or PAL/SECAM). The Series3 ("TiVo HD" and "TiVo HD XL") DVRs and the Series4 ("TiVo Premiere" and "TiVo Premiere XL") DVRs are capable of recording HDTV both from antenna (over the air) and from cable (with an unencrypted QAM tuner, or encrypted with a CableCARD), in addition to normal standard-definition television from the same sources. Unlike the HR10-250, neither the Series3 nor Series4 units can record from the DirecTV service; conversely, the HR10-250 cannot record from digital cable. Other TiVo models may be connected to a high-definition television (HDTV), but are not capable of recording HDTV signals, although they may be connected to a cable HDTV set-top box and record the down-converted outputs.
In 2008, some cable companies started to deploy switched digital video (SDV) technology, which initially was incompatible with the Series3 and TiVo HD units. TiVo Inc. worked with cable operators on a tuning adapter that connects to the TiVo via USB to enable SDV. Some MSOs now offer these adapters free to their customers with TiVo DVRs.
Drive expansion
TiVo has partnered with Western Digital to create an external hard drive, the My DVR Expander eSATA Edition, for TiVo HD and Series3 systems. The external drive plugs into the TiVo box using an eSATA interface. The first version was a 500 GB drive that shipped in June 2008; a 1 TB version began shipping in June 2009. The 1 TB version expands the TiVo HD and Series3 systems' capacity by up to 140 hours of HD content or 1,200 hours of standard programming.
TiVo was not designed to have an external drive disconnected once it has been added, because data for each recording is spread across both the internal and external disk drives. As a result, it is not possible to disconnect the external drive without deleting content recorded after the external drive was added. If disconnected, any recordings made will not be usable on either the internal or external drives. However, the external drive may be removed (along with content) without losing settings.
Various capacities of external drives have been shipped since the product was initially released. There were reports of product reliability issues, and a brief period of unavailability.
The Western Digital 1 TB and 500 GB My DVR Expander eSATA Edition and My DVR Expander USB Edition drives have been discontinued and replaced with the Western Digital My Book AV DVR Expander 1 TB drive. This drive received a facelift from the previous generation: it sports a glossy finish, a tiny white LED power indicator, and a push-button power switch on the back. The biggest change is that this drive includes both eSATA and USB in one device. This device is DirecTV, Dish Network, TiVo, Moxi, Pace, and Scientific Atlanta (Cisco) certified. Seagate has come out with its own DVR-oriented drive, the Seagate GoFlex DVR, which comes in 1 TB and 2 TB capacities. TiVo has not approved the Seagate product for use with TiVo DVRs, and it will not currently function with any TiVo products.
Hacking
Users have installed additional or larger hard drives in their TiVo boxes to increase their recording capacity. Others have designed and built Ethernet cards and a Web interface (TiVoWeb), and figured out how to extract, insert and transfer video among their TiVo boxes. Other hacks include adding time to the start and end of recording intelligently and sending daily e-mails of the TiVo's activity.
TiVo still uses the same encoding, however, for the media files (saved as .TiVo files). These are MPEG files encoded with the user's Media Access Key (MAK). However, software developers have written programs such as tivodecode and tivodecode Manager to strip the MAK from the file, allowing the user to watch or send the recordings to friends.
TiVo in the cloud
On January 4, 2018, TiVo announced its next-gen platform, a catch-all product for providers like cable companies. It is available for multiple TV devices, including not only Linux- and Android TV-based set-top boxes and traditional DVRs, but also DVR-free streaming devices such as Apple TV and Amazon's Fire TV, as well as phones, tablets and PCs. The platform allows providers to take advantage of TiVo's user interface, voice control, personalization and recommendations. TiVo expects its user interface could provide an advantage over competitors such as Netflix, Hulu, and Amazon Video "in a world where cord-cutting is increasingly popular."
In the 2020s, Internet service providers such as TDS and Astound started using set-top boxes with TiVo user interfaces to provide TV and cloud DVR service to their customers. These devices can also run apps for streaming services such as Netflix and Prime Video.
Competitors and market share
While its former main competitor in the United States, ReplayTV, had adopted a commercial-skip feature, TiVo decided to avoid automatic implementation fearing such a move might provoke backlash from the television industry. ReplayTV was sued over this feature as well as the ability to share shows over the Internet, and these lawsuits contributed to the bankruptcy of SONICblue, its owner at the time. Its new owner, DNNA, dropped both features in the final ReplayTV model, the 5500.
After the WebTV capability was demonstrated at the same 1999 CES at which TiVo and ReplayTV showed their products, Dish (then named Dish Network) added DVR functionality a few months later to its DishPlayer 7100 (and later its 7200), with its EchoStar unit producing the hardware and Microsoft providing software that included WebTV, the same software Microsoft would later use for its UltimateTV DVR for DirecTV. The TiVo, ReplayTV, and DishPlayer 7100 represent the very first DVRs; they were in development at the same time and were released to market at about the same time.
SONICblue, the owner of ReplayTV, filed for bankruptcy after being sued over ReplayTV's ability to automatically skip commercials and other features thought to violate copyrights. EchoStar (Dish) sued Microsoft in 2001 for failing to support the software in the DishPlayer 7100 and 7200, ended its relationship with Microsoft, ceased offering the DishPlayer 7100/7200 to its subscribers, and instead produced its own in-house DVR. DirecTV eventually dropped Microsoft's UltimateTV and kept the DirecTiVo as its only DVR offering for quite some time.
Other distributors' competing DVRs in the United States include those of Comcast and Verizon, although both distribute third-party hardware from manufacturers such as Motorola and the former Scientific Atlanta unit of Cisco Systems with this functionality built in. Verizon uses boxes fitted for FiOS, allowing high-speed Internet access and other features. However, TiVo is compatible with the FiOS TV service, because when the TV programming arrives at the home via the FiOS fiber-to-the-home network, it is converted to CableLabs-specification QAM channels exactly like those used by cable TV companies. AT&T's U-verse is an IPTV service that is incompatible with TiVo.
Despite having gained 234,000 subscribers in the last quarter of 2011, as of January 2012 TiVo had only (approximately) 2.3 million subscribers in the United States. This is down from a peak of 4.36 million in January 2006. As of January 31, 2016, TiVo reported 6.8 million subscribers.
Issues
Privacy concerns
TiVo collects detailed usage data from units via broadband Internet. As units download schedule data, they transmit household viewing habits to TiVo Inc. Collected information includes a log of everything watched (time and channel) and remote keypresses, such as fast-forwarding through or replaying content. Many users were surprised when TiVo released data on how many users rewatched the exposure of Janet Jackson's breast during the 2004 Super Bowl. TiVo records usage data for its own research and also sells it to other corporations, such as advertisers. Nielsen and TiVo have also previously collaborated to track viewing habits. This data is sold to advertising agencies as a way of documenting to their corporate clients the number of viewers watching specific commercials.
TiVo has three levels of data collection. By default, the user is in "opt-out" status, where all usage data is aggregated by ZIP Code, and individual viewing habits are not tracked. Certain optional features and promotions require the user to opt in, and individual information is then collected for targeted show suggestions or advertising. Users can request that TiVo block the collection of anonymous viewing information and diagnostic information from their TiVo DVR.
Litigation
TiVo holds several patents regarding digital video recorder technology, including one for its "Time Warp" feature, which have been asserted against cable TV operators and competing DVR box makers.
Opposition by content providers
Content flagging
In September 2005, a TiVo software upgrade added the ability for broadcasters to "flag" programs to be deleted after a certain date. Some customers had recordings deleted, or could not use their flagged recordings (transfer to a computer or burn to DVD) as they could unflagged material. The initial appearance of this on random shows was due to a software bug; it was later enabled on pay-per-view and video-on-demand content.
Pop-up advertisements
During early 2005, TiVo began test marketing "pop-up" advertisements to select subscribers, to explore it as an alternative source of revenue. The idea was that as users fast-forward through certain commercials of TiVo advertisers, they would also see a static image ad more suitable and effective than the broken video stream.
At its announcement, the concept of extra advertisements drew heavy criticism from subscribers. Some lifetime subscribers were upset that they had already paid for a service based upon their previous ad-free experience, while others argued that they had purchased the service for the specific purpose of dodging advertisements. In 2007, TiVo made changes to its pop-up ad system to show pop-up ads only if the user fast-forwards through a commercial that has a corresponding pop-up ad.
In 2019, some TiVo DVRs began running "pre-roll advertisements". These ads are short but mandatory, and run before the user can play a recorded program. The ads are downloaded from the Internet, so a brief delay occurs before the mandatory ads begin, further delaying playback. Only TiVo DVRs using TiVo Experience 4 software (Roamio, Bolt, Edge, etc.) have this forced advertising; earlier TiVo software does not deploy pre-roll ads.
GNU General Public License and Tivoization
In 2006, the Free Software Foundation decided to combat TiVo's technical system of blocking users from running modified software.
This behavior, which Richard Stallman dubbed Tivoization, was tackled by creating a new version of the GNU General Public License, the GNU GPLv3, which prohibits this activity.
The kernel of the operating system of TiVo-branded hardware, the Linux kernel, is distributed under the terms of the GNU GPLv2. The FSF's goal is to ensure that all recipients of software licensed under the GPLv3 are not restricted by hardware constraints on the modification of distributed software.
This new license provision was acknowledged by TiVo in its April 2007 SEC filing: "we may be unable to incorporate future enhancements to the GNU/Linux operating system into our software, which could adversely affect our business".
CableCARD support uncertainty (USA)
In September 2020, the Federal Communications Commission (FCC) changed its rules so that cable television providers are no longer required to support CableCARD. Providers may choose to keep supporting CableCARD, but TiVo owners have no assurance that they will: a cable television provider may discontinue CableCARD support at any time.
See also
TiVo digital video recorders
References
External links
Digital video recorders
Interactive television
Linux-based devices
Products introduced in 1999
Television terminology
Video storage
Television time shifting technology | TiVo | Technology | 7,150 |
56,566 | https://en.wikipedia.org/wiki/Chronobiology | Chronobiology is a field of biology that examines timing processes, including periodic (cyclic) phenomena in living organisms, such as their adaptation to solar- and lunar-related rhythms. These cycles are known as biological rhythms. Chronobiology comes from the ancient Greek χρόνος (chrónos, meaning "time"), and biology, which pertains to the study, or science, of life. The related terms chronomics and chronome have been used in some cases to describe either the molecular mechanisms involved in chronobiological phenomena or the more quantitative aspects of chronobiology, particularly where comparison of cycles between organisms is required.
Chronobiological studies include but are not limited to comparative anatomy, physiology, genetics, molecular biology and behavior of organisms related to their biological rhythms. Other aspects include epigenetics, development, reproduction, ecology and evolution.
The subject
Chronobiology studies variations in the timing and duration of biological activity in living organisms, which occur for many essential biological processes. These occur in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (also known as blue-green algae; see bacterial circadian rhythms). The best-studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around", and dies, "day"; together they mean "approximately a day". It is regulated by circadian clocks.
The circadian rhythm can further be broken down into routine cycles during the 24-hour day:
Diurnal, which describes organisms active during daytime
Nocturnal, which describes organisms active in the night
Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: domestic cats, white-tailed deer, some bats)
While circadian rhythms are defined as endogenously regulated, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). For example, a plant's endogenous cycles may regulate the activity of associated bacteria by controlling the availability of plant-produced photosynthate.
Many other important cycles are also studied, including:
Infradian rhythms, which are cycles longer than a day. Examples include circannual or annual cycles that govern migration or reproduction cycles in many plants and animals, or the human menstrual cycle.
Ultradian rhythms, which are cycles shorter than 24 hours, such as the 90-minute REM cycle, the 4-hour nasal cycle, or the 3-hour cycle of growth hormone production.
Tidal rhythms, commonly observed in marine life, which follow the roughly 12.4-hour transition from high to low tide and back.
Lunar rhythms, which follow the lunar month (29.5 days). They are relevant e.g. for marine life, as the level of the tides is modulated across the lunar cycle.
Gene oscillations – some genes are expressed more during certain hours of the day than during other hours.
Within each cycle, the time period during which the process is more active is called the acrophase. When the process is less active, the cycle is in its bathyphase or trough phase. The particular moment of highest activity is the peak or maximum; the lowest point is the nadir.
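These terms are often made precise with the cosinor model associated with Franz Halberg, which fits a rhythm to a cosine curve; in LaTeX form (a standard formalization offered here for clarity, not taken from this article's sources):

Y(t) = M + A \cos\left( \frac{2\pi t}{\tau} + \phi \right)

where M is the MESOR (midline estimating statistic of rhythm, the rhythm-adjusted mean), A is the amplitude (half the peak-to-trough difference), \tau is the period (about 24 hours for a circadian rhythm), and \phi is the acrophase expressed as a phase angle. The peak occurs where the cosine equals 1 and the nadir where it equals -1.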
History
A circadian cycle was first observed in the 18th century in the movement of plant leaves by the French scientist Jean-Jacques d'Ortous de Mairan. In 1751 Swedish botanist and naturalist Carl Linnaeus (Carl von Linné) designed a flower clock using certain species of flowering plants. By arranging the selected species in a circular pattern, he designed a clock that indicated the time of day by the flowers that were open at each given hour. For example, among members of the daisy family, he used the hawk's beard plant which opened its flowers at 6:30 am and the hawkbit which did not open its flowers until 7 am.
The 1960 symposium at Cold Spring Harbor Laboratory laid the groundwork for the field of chronobiology.
It was also in 1960 that Patricia DeCoursey invented the phase response curve, one of the major tools used in the field since.
Franz Halberg of the University of Minnesota, who coined the word circadian, is widely considered the "father of American chronobiology." However, it was Colin Pittendrigh and not Halberg who was elected to lead the Society for Research in Biological Rhythms in the 1970s. Halberg wanted more emphasis on the human and medical issues while Pittendrigh had his background more in evolution and ecology. With Pittendrigh as leader, the Society members did basic research on all types of organisms, plants as well as animals. More recently it has been difficult to get funding for such research on any other organisms than mice, rats, humans and fruit flies.
The role of retinal ganglion cells
Melanopsin as a circadian photopigment
In 2002, Hattar and his colleagues showed that melanopsin plays a key role in a variety of photic responses, including the pupillary light reflex and synchronization of the biological clock to daily light-dark cycles. He also described the role of melanopsin in intrinsically photosensitive retinal ganglion cells (ipRGCs). Using a rat melanopsin gene, a melanopsin-specific antibody, and fluorescent immunocytochemistry, the team concluded that melanopsin is expressed in some RGCs. Using a beta-galactosidase assay, they found that these RGC axons exit the eyes together with the optic nerve and project to the suprachiasmatic nucleus (SCN), the primary circadian pacemaker in mammals. They also demonstrated that the RGCs containing melanopsin were intrinsically photosensitive. Hattar concluded that melanopsin is the photopigment in a small subset of RGCs that contributes to the intrinsic photosensitivity of these cells and is involved in their non-image-forming functions, such as photic entrainment and the pupillary light reflex.
Melanopsin cells relay inputs from rods and cones
Hattar, armed with the knowledge that melanopsin was the photopigment responsible for the photosensitivity of ipRGCs, set out to study the exact role of the ipRGC in photoentrainment. In 2008, Hattar and his research team transplanted diphtheria toxin genes into the mouse melanopsin gene locus to create mutant mice that lacked ipRGCs. The research team found that while the mutants had little difficulty identifying visual targets, they could not entrain to light-dark cycles. These results led Hattar and his team to conclude that ipRGCs do not affect image-forming vision, but significantly affect non-image forming functions such as photoentrainment.
Distinct ipRGCs
Further research has shown that ipRGCs project to different brain nuclei to control both non-image forming and image forming functions. These brain regions include the SCN, where input from ipRGCs is necessary to photoentrain circadian rhythms, and the olivary pretectal nucleus (OPN), where input from ipRGCs controls the pupillary light reflex. Hattar and colleagues conducted research demonstrating that ipRGCs project to hypothalamic, thalamic, striatal, brainstem and limbic structures. Although ipRGCs were initially viewed as a uniform population, further research revealed that there are several subtypes with distinct morphology and physiology. Since 2011, Hattar's laboratory has contributed to these findings and has successfully distinguished subtypes of ipRGCs.
Diversity of ipRGCs
Hattar and colleagues utilized Cre-based strategies for labeling ipRGCs to reveal that there are at least five ipRGC subtypes that project to a number of central targets. Five classes of ipRGCs, M1 through M5, have been characterized to date in rodents. These classes differ in morphology, dendritic localization, melanopsin content, electrophysiological profiles, and projections.
Diversity in M1 cells
Hattar and his co-workers discovered that, even among the subtypes of ipRGC, there can be designated sets that differentially control circadian versus pupillary behavior. In experiments with M1 ipRGCs, they discovered that the transcription factor Brn3b is expressed by M1 ipRGCs that target the OPN, but not by ones that target the SCN. Using this knowledge, they designed an experiment crossing Melanopsin-Cre mice with mice that conditionally expressed a toxin from the Brn3b locus. This allowed them to selectively ablate only the OPN-projecting M1 ipRGCs, resulting in a loss of pupil reflexes; however, it did not impair circadian photoentrainment. This demonstrated that M1 ipRGCs consist of molecularly distinct subpopulations that innervate different brain regions and execute specific light-induced functions. This isolation of a 'labeled line' consisting of differing molecular and functional properties in a highly specific ipRGC subtype was an important first for the field. It also underscored the extent to which molecular signatures can be used to distinguish between RGC populations that would otherwise appear the same, which in turn facilitates further investigation into their specific contributions to visual processing.
Psychological impact of light exposure
Previous studies in circadian biology have established that exposure to light during abnormal hours leads to sleep deprivation and disruption of the circadian system, which affect mood and cognitive functioning. While this indirect relationship had been corroborated, not much work had been done to examine whether there was a direct relationship between irregular light exposure, aberrant mood, cognitive function, normal sleep patterns and circadian oscillations. In a study published in 2012, the Hattar Laboratory was able to show that deviant light cycles directly induce depression-like symptoms and lead to impaired learning in mice, independent of sleep and circadian oscillations.
Effect on mood
ipRGCs project to areas of the brain that are important for regulating circadian rhythmicity and sleep, most notably the SCN, subparaventricular nucleus, and the ventrolateral preoptic area. In addition, ipRGCs transmit information to many areas in the limbic system, which is strongly tied to emotion and memory. To examine the relationship between deviant light exposure and behavior, Hattar and his colleagues studied mice exposed to alternating 3.5-hour light and dark periods (T7 mice) and compared them with mice exposed to alternating 12-hour light and dark periods (T24 mice). Compared to a T24 cycle, the T7 mice got the same amount of total sleep and their circadian expression of PER2, an element of the SCN pacemaker, was not disrupted. Through the T7 cycle, the mice were exposed to light at all circadian phases. Light pulses presented at night lead to expression of the transcription factor c-Fos in the amygdala, lateral habenula, and subparaventricular nucleus further implicating light's possible influence on mood and other cognitive functions.
Mice subjected to the T7 cycle exhibited depression-like symptoms, showing decreased preference for sucrose (sucrose anhedonia) and more immobility than their T24 counterparts in the forced swim test (FST). Additionally, T7 mice maintained rhythmicity in serum corticosterone; however, the levels were elevated compared to those of the T24 mice, a trend that is associated with depression. Chronic administration of the antidepressant fluoxetine lowered corticosterone levels in T7 mice and reduced depression-like behavior while leaving their circadian rhythms unaffected.
Effect on learning
The hippocampus is a structure in the limbic system that receives projections from ipRGCs. It is required for the consolidation of short-term memories into long-term memories as well as spatial orientation and navigation. Depression and heightened serum corticosterone levels are linked to impaired hippocampal learning. Hattar and his team analyzed the T7 mice in the Morris water maze (MWM), a spatial learning task that places a mouse in a small pool of water and tests the mouse's ability to locate and remember the location of a rescue platform located just below the waterline. Compared to the T24 mice, the T7 mice took longer to find the platform in subsequent trials and did not exhibit a preference for the quadrant containing the platform. In addition, T7 mice exhibited impaired hippocampal long-term potentiation (LTP) when subjected to theta burst stimulation (TBS). Recognition memory was also affected, with T7 mice failing to show preference for novel objects in the novel object recognition test.
Necessity of ipRGCs
Mice without ipRGCs (Opn4aDTA/aDTA mice) are not susceptible to the negative effects of an aberrant light cycle, indicating that light information transmitted through these cells plays an important role in the regulation of mood and cognitive functions such as learning and memory.
Research developments
Light and melatonin
More recently, light therapy and melatonin administration have been explored by Alfred J. Lewy (OHSU), Josephine Arendt (University of Surrey, UK) and other researchers as a means to reset animal and human circadian rhythms. Additionally, the presence of low-level light at night accelerates circadian re-entrainment of hamsters of all ages by 50%; this is thought to be related to simulation of moonlight.
In the second half of 20th century, substantial contributions and formalizations have been made by Europeans such as Jürgen Aschoff and Colin Pittendrigh, who pursued different but complementary views on the phenomenon of entrainment of the circadian system by light (parametric, continuous, tonic, gradual vs. nonparametric, discrete, phasic, instantaneous, respectively).
Chronotypes
Humans can have a propensity to be morning people or evening people; these behavioral preferences are called chronotypes for which there are various assessment questionnaires and biological marker correlations.
Mealtimes
There is also a food-entrainable biological clock, which is not confined to the suprachiasmatic nucleus. The location of this clock has been disputed. Working with mice, however, Fuller et al. concluded that the food-entrainable clock seems to be located in the dorsomedial hypothalamus. During restricted feeding, it takes over control of such functions as activity timing, increasing the chances of the animal successfully locating food resources.
Diurnal patterns on the Internet
In 2018, a study published in PLoS ONE showed how 73 psychometric indicators measured on Twitter content follow a diurnal pattern.
A follow-up study, published in Chronobiology International in 2021, showed that these patterns were not disrupted by the 2020 UK lockdown.
Modulators of circadian rhythms
In 2021, scientists reported the development of a light-responsive, days-lasting modulator of tissue circadian rhythms that acts via CK1 inhibition. Such modulators may be useful for chronobiology research and for the repair of organs that are "out of sync".
Other fields
Chronobiology is an interdisciplinary field of investigation. It interacts with medical and other research fields such as sleep medicine, endocrinology, geriatrics, sports medicine, space medicine, psychiatry and photoperiodism.
See also
Bacterial circadian rhythms
Biological clock (aging)
Circadian rhythm
Circannual cycle
Circaseptan, 7-day biological cycle
Familial sleep traits
Frank A. Brown, Jr.
Hitoshi Okamura
Light effects on circadian rhythm
Photoperiodism
Suprachiasmatic nucleus
Scotobiology
Time perception
Malcolm von Schantz
References
Further reading
Hastings, Michael, "The brain, circadian rhythms, and clock genes". Clinical review. BMJ 1998;317:1704–1707, 19 December.
U.S. Congress, Office of Technology Assessment, "Biological Rhythms: Implications for the Worker". U.S. Government Printing Office, September 1991. Washington, DC. OTA-BA-463. NTIS PB92-117589
Ashikari, M., Higuchi, S., Ishikawa, F., and Tsunetsugu, Y., "Interdisciplinary Symposium on 'Human Beings and Environments': Approaches from Biological Anthropology, Social Anthropology and Developmental Psychology". Sunday, 25 August 2002
"Biorhythm experiment management plan", NASA, Ames Research Center. Moffett Field, 1983.
"Biological Rhythms and Human Adaptation to the Environment". US Army Medical Research and Materiel Command (AMRMC), US Army Research Institute of Environmental Medicine.
Ebert, D., K.P. Ebmeier, T. Rechlin, and W.P. Kaschka, "Biological Rhythms and Behavior", Advances in Biological Psychiatry. ISSN 0378-7354
Horne, J.A. (Jim) & Östberg, Olov (1976). A Self-Assessment Questionnaire to determine Morningness-Eveningness in Human Circadian Rhythms. International Journal of Chronobiology, 4, 97–110.
Roenneberg, Till (2010). Wie wir ticken – Die Bedeutung der Chronobiologie für unser Leben. Cologne: Dumont.
The Linnean Society of London
External links
Halberg Chronobiology Center at the University of Minnesota, founded by Franz Halberg, the "Father of Chronobiology"
The University of Virginia offers an online tutorial on chronobiology.
See the Science Museum of Virginia publication Can plants tell time?
The University of Manchester has an informative Biological Clock Web Site
S Ertel's analysis of Chizhevsky's work
Biological processes
Circadian rhythm
Neuroscience | Chronobiology | Biology | 3,781 |
10,145,584 | https://en.wikipedia.org/wiki/Network%20delay | Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts:
Processing delay – the time it takes a router to process the packet header
Queuing delay – the time the packet spends in routing queues
Transmission delay – the time it takes to push the packet's bits onto the link
Propagation delay – the time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds.
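Summed per hop, these components give the nodal delay, d_nodal = d_proc + d_queue + d_trans + d_prop, where the transmission delay is the packet size divided by the link rate and the propagation delay is the link length divided by the propagation speed of the medium. A minimal Python sketch of the calculation (the example figures are illustrative assumptions, not measurements):

def nodal_delay(packet_bits, link_bps, link_m, prop_m_per_s,
                proc_s=0.0, queue_s=0.0):
    """Per-hop delay in seconds: processing + queuing + transmission + propagation."""
    transmission = packet_bits / link_bps   # time to push all bits onto the link
    propagation = link_m / prop_m_per_s     # time for the signal to traverse the link
    return proc_s + queue_s + transmission + propagation

# Example: a 1500-byte packet on a 100 Mbit/s link spanning 1000 km of fiber
# (signals propagate at roughly 2e8 m/s in fiber), ignoring processing/queuing.
delay = nodal_delay(packet_bits=1500 * 8,
                    link_bps=100e6,
                    link_m=1_000_000,
                    prop_m_per_s=2e8)
print(f"{delay * 1000:.3f} ms")   # 0.12 ms transmission + 5 ms propagation = 5.12 ms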
See also
Age of Information
End-to-end delay
Lag (video games)
Latency (engineering)
Minimum-Pairs Protocol
Round-trip delay
References
External links
Computer networking | Network delay | Technology,Engineering | 237 |
31,546,068 | https://en.wikipedia.org/wiki/List%20of%20RAM%20drive%20software | RAM drive software allows part of a computer's RAM (memory) to be seen as if it were a disk drive, with a volume name and, if supported by the operating system, a drive letter. A RAM drive has much faster read and write access than a hard drive with rotating platters, and it is volatile: it is destroyed with its contents when the computer is shut down or crashes. Volatility is an advantage where security requires that sensitive data not be stored permanently, and it prevents the accumulation of obsolete temporary data, but it is a disadvantage where a drive is used for faster processing of data that must be kept. Data can be copied between conventional mass storage and a RAM drive to preserve it on power-down and load it on start-up.
Overview
Features
Features that vary from one package to another:
Some RAM drives automatically back up their contents to normal mass storage on power-down and load them when the computer is started. If this functionality is not provided, contents can always be preserved by start-up and close-down scripts (see the sketch after this list), or manually if the operator remembers to do so.
Some software allows several RAM drives to be created; other programs support only one.
Some RAM drives, when used with 32-bit operating systems (particularly 32-bit Microsoft Windows) on computers with IBM PC architecture, allow memory above the 4 GB point in the memory map, if present, to be used; this memory is unmanaged and not normally accessible. Software using unmanaged memory can cause stability problems.
Specifically on IBM PC-based 32-bit operating systems, some RAM drives are able to use any 'unmanaged' or 'invisible' RAM below 4 GB in the memory map (see the 3 GB barrier), i.e. RAM in the 'PCI hole'. Note: do not assume that RAM drives supporting AWE (Address Windowing Extensions) memory above 4 GB will also support unmanaged PAE (Physical Address Extension) memory below 4 GB; most do not.
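If a package lacks built-in persistence, a pair of start-up and close-down scripts can provide it, as noted in the feature list above. A minimal Python sketch (the mount point and backup paths are hypothetical examples; real scripts would be hooked into the operating system's start-up and shutdown mechanisms):

import os
import shutil
import sys

RAM_DRIVE = "/mnt/ramdisk"         # hypothetical RAM drive mount point
BACKUP = "/var/ramdisk-backup"     # persistent copy on conventional storage

def save_on_shutdown():
    """Run from a close-down script: snapshot the RAM drive to disk."""
    if os.path.exists(BACKUP):
        shutil.rmtree(BACKUP)              # replace the previous snapshot
    shutil.copytree(RAM_DRIVE, BACKUP)

def restore_on_startup():
    """Run from a start-up script: repopulate the (empty) RAM drive."""
    if os.path.isdir(BACKUP):
        shutil.copytree(BACKUP, RAM_DRIVE, dirs_exist_ok=True)  # Python 3.8+

if __name__ == "__main__":
    {"save": save_on_shutdown, "restore": restore_on_startup}[sys.argv[1]]()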
FreeBSD
md – memory disk
This driver provides support for four kinds of memory-backed virtual disks: malloc, preload, vnode and swap. Disks may be created with the following command-line tools: mdconfig and mdmfs. Examples of how to use these programs follow.
To create and mount memory disk with mdmfs:
# mdmfs -F newimage -s 5m md0 /mnt
To create and mount memory disk with mdconfig:
# mdconfig -a -t swap -s 5m -u 0
# newfs -U md0
# mount /dev/md0 /mnt
To destroy the previously created disk:
# umount /mnt
# mdconfig -d -u 0
Linux
shm
Modern Linux systems come with a user-accessible, tmpfs-backed RAM disk mounted at /dev/shm.
RapidDisk
RapidDisk is a free and open-source project containing a Linux kernel module and an administration utility that functions similarly to ramdiskadm on the Solaris operating system. With the rxadm utility, the user can dynamically attach, remove, and resize RAM disk volumes and treat them like any other block device.
RAMDisk
A free and open-source utility that allows RAM to be used as a folder.
tmpfs and ramfs
An example of how to use tmpfs and ramfs in a Linux environment is as follows:
$ mkdir /var/ramdisk
Once the mount point has been created, the mount command can be used to mount a tmpfs or ramfs file system on top of it:
$ mount -t tmpfs none /var/ramdisk -o size=28m
Now each time /var/ramdisk is accessed all reads and writes will be coming directly from memory.
There are two differences between tmpfs and ramfs:
1) The mounted space of ramfs is theoretically infinite: ramfs grows on demand, which can easily lock up or crash the system by using up all available memory, or trigger heavy swapping to free more memory for the ramfs. For this reason, limiting the size of a ramfs area is advisable.
2) tmpfs is backed by the computer's swap space.
There are also many "wrappers" around Linux RAM disks, such as Profile-sync-daemon (psd), that let users speed up desktop applications by moving I/O-intensive caches into RAM.
Microsoft Windows
Non-proprietary
ImDisk
ImDisk Virtual Disk Driver is a disk image emulator created by Olof Lagerkvist. It is free and open-source software, and is available in 32- and 64-bit variants. It is digitally signed, which makes it compatible with 64-bit versions of Microsoft Windows without having to be run in Test mode. The 64-bit version has no practical limit to the size of RAM disk that may be created.
ImDisk Toolkit is third-party, free and open-source software that embeds the ImDisk Virtual Disk Driver and adds several features.
ERAM
ERAM is an open-source driver that supports creating a drive of up to 4 GB from the total amount of RAM, can use paged or non-paged memory, and supports backing up the drive to an image. It works on Windows NT/2000/XP/7/10 (32- and 64-bit). Its driver and source code are available at https://github.com/Zero3K/ERAM.
Proprietary
AMD Radeon RAMDisk
AMD Radeon RAMDisk is available in free versions (RAM drive up to 4 GB, or 6 GB with AMD memory), and commercial versions for drives up to 64 GB. The free version is 'advertising supported'. Creates only a single drive (does not support multiple RAM drives). Can be backed up periodically to hard drive, and automatically loaded when the computer is started. AMD Radeon RAMDisk is a rebranded version of Dataram RAMDisk.
Dataram RAMDisk
Dataram's RAMDisk is freeware for disks of up to 1 GB (reduced from 4 GB, per an October 2015 site visit). It was originally developed and marketed by John Lajoie through his private consulting company until 2001, when he sold his rights to Cenatek, before being acquired by Dataram. RAM disks larger than 4 GB require registration and a USD $18.99 single-user license. When purchasing physical RAM from Dataram, the RAMDisk license was provided free of charge (per Dataram Government Sales on 4/25/2014, this is no longer the case). Compatible with all 32-bit and 64-bit versions of Windows 10, Windows 8, Windows 7, Windows Vista, Windows XP, Windows Server 2008, and Windows Server 2003.
Dimmdrive RAMDisk
A RAM disk built specifically for gamers, featuring real-time file synchronization, Steam integration, and a "USB3 Turbo Mode". The interface was designed to support both technical and non-technical game enthusiasts. Cost is $29 at Dimmdrive.com and $30 on Steam ($14.99 on Steam as of 2018).
Gavotte RamDisk
Can use Physical Address Extension to create a virtual disk in memory normally inaccessible to 32-bit versions of Microsoft Windows (both memory above the 4 GB point, and memory in the PCI hole). There is also an open source plugin that replaces the RAM drive on Bart's PE Builder with one based on Gavotte's rramdisk.sys.
Gilisoft RAMDisk
RAMDisk software for Windows 2000/2003/XP/Vista/7 (32- and 64-bit)/10 with simple setup. It permits mounting and unmounting of RAM disk images to and from drive image files, along with automated startup and shutdown features. $25.
Gizmo Central
Gizmo Central is a freeware program that can create and mount virtual disk files. It can also create a RAM disk of up to 4 GB in size, as Gizmo is a 32-bit program.
Passmark OSFMount
Passmark's OSFMount supports the creation of RAM disks, and also allows you to mount local disk image files (bit-for-bit copies of a disk partition) in Windows with a drive letter. OSFMount is a free utility designed for use with PassMark OSForensics.
Primo Ramdisk
Primo Ramdisk, by Romex Software, provides a polished interface and works with all Windows environments from XP to Windows 10 and all Windows Server editions from 2003 to 2019 (currently). It supports up to 128 disks, each of up to 32 GB in the Pro version and 1 TB in the Ultimate and Server editions, supports using invisible memory in 32-bit versions of Windows, and can save contents at shutdown or hibernation. Paid and trial versions are available.
SoftPerfect RAM Disk
Available for Windows 7 to 11, or Windows Server from 2008 R2 to 2022; 32/64-bit x86 or 64-bit ARM. SoftPerfect RAM Disk can only access memory available to Windows, i.e. on 32-bit systems it is limited to the same 4 GB as 32-bit Windows itself; to use physical memory beyond 4 GB, it must be installed on 64-bit Windows. Multiple RAM disks can be created, and these can optionally be made persistent by automatically saving contents to and restoring from a disk image file. Version 3.4.8 and earlier didn't require a license for home (non-commercial) users.
StarWind Software Virtual RAM Drive Emulator
StarWind Software makes a freeware RAM disk software for mounting memory as actual drives within Windows. Both x86 and x64 versions exist.
Ultra RamDisk
RAMDisk software which can also mount various CD image formats, such as iso, cue, ccd, nrg, mds, and img. The application has paid and free versions; the latter allows creating a single RAM disk of up to 2 GB in size.
VSuite Ramdisk
The Free Edition (limited to 32-bit Windows 2000/XP/2003) is able to use 'invisible' RAM in the 3.25–4 GB 'gap' (if the motherboard has an i946 or later chipset) and is also capable of saving to hard disk on power-down (so, in theory, the RAM disk can hold the Windows XP swap file and survive a hibernate). While the free edition allows multiple RAM disk drives to be set up, the total of all drives is limited to 4096 MB. The current version, VSuite Ramdisk II, has been rebranded as 'Primo Ramdisk', all versions of which are chargeable.
WinRamTech (QSoft) Ramdisk Enterprise
An affordable RAM disk compatible with all Windows workstation and server OS versions (32- and 64-bit) starting from Windows 2000. The content of the RAM disk can be made persistent, i.e. saved to an image file on the hard disk at regular intervals and/or at shutdown, and restored from the same image file at boot time. Because of the built-in disk-format routines and the built-in loading of the image file, the RAM disk drive is already fully accessible at the boot stage where services and automatically started programs are launched. Concurrent benchmarks of two RAM disks running at the same time suggest that this RAM disk is nearly the fastest. Although development of this RAM disk ended in 2017, version 5.3.2.15 runs on Windows 10/11 and may still be purchased. The free 64-bit, 256 MB restricted evaluation version never expires. The company provides OEM-personalized 64-bit 5.3.2.15 versions for Windows 10/11 (unlimited site license).
Microsoft source code
Ramdisk.sys sample driver for Windows 2000
Microsoft offers a 'demonstration' RAM disk driver for Windows 2000 as part of the Windows Driver Kit. It is limited to using the same physical RAM as the operating system, and is available as a free download with source code.
RAMDisk sample for Windows 7/8
Microsoft provides source code for a RAM disk driver for Windows 7 and 8.
Native
Windows also has a rough analog to tmpfs in the form of "temporary files". Files created with both FILE_ATTRIBUTE_TEMPORARY and FILE_FLAG_DELETE_ON_CLOSE are held in memory and only written to disk if the system experiences high memory pressure. In this way they behave like tmpfs, except the files are written to the specified path during low memory situations, rather than to swap space. This technique is often used by servers along with TransmitFile to render content to a buffer before sending to the client.
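A minimal sketch of this technique in Python via ctypes (Windows only; the file path below is a hypothetical example). The flag values are the standard constants from the Windows SDK headers:

import ctypes

GENERIC_READ  = 0x80000000
GENERIC_WRITE = 0x40000000
CREATE_ALWAYS = 2
FILE_ATTRIBUTE_TEMPORARY  = 0x00000100   # hint: keep the data in the cache
FILE_FLAG_DELETE_ON_CLOSE = 0x04000000   # delete the file when the handle closes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = ctypes.c_void_p
kernel32.CloseHandle.argtypes = [ctypes.c_void_p]

handle = kernel32.CreateFileW(
    r"C:\Temp\scratch.bin",              # hypothetical path
    GENERIC_READ | GENERIC_WRITE,
    0, None, CREATE_ALWAYS,
    FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
    None)
# Reads and writes through the handle normally stay in memory; the file is
# only flushed to disk under memory pressure, and vanishes on CloseHandle.
kernel32.CloseHandle(handle)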
Solaris
Ramdiskadm
Ramdiskadm is a utility found in the Solaris operating system to dynamically add and destroy RAM disk volumes of any user-defined size. An example of how to use ramdiskadm to add a new RAM disk in a Solaris environment is as follows:
$ ramdiskadm -a ramdisk1 100m
To destroy the RAM disk:
$ ramdiskadm -d ramdisk1
All created RAM disks can be accessed under the /dev/ramdisk directory path and treated like any other block device; that is, they can be accessed like a physical block device, labeled with a file system, mounted, and even used in a ZFS pool.
DOS
FreeDOS includes SRDISK
MS-DOS 3.2 includes RAMDRIVE.SYS
PC DOS 3.0 includes VDISK.SYS
DR-DOS included VDISK.SYS
Multiuser DOS included an automatic RAM disk as drive M:
References
External links
12 RAM Disk Software Benchmarked for Fastest Read and Write Speed
RAM Disk technology: Performance Comparison
Are RAM Drives Faster Than SSDs? 5 Things You Must Know
RAM drive software | List of RAM drive software | Technology | 2,916 |
14,213,149 | https://en.wikipedia.org/wiki/Natal%20homing | Natal homing, or natal philopatry, is the homing process by which some adult animals that have migrated away from their juvenile habitats return to their birthplace to reproduce. This process is primarily used by aquatic animals such as sea turtles and salmon, although some migratory birds and mammals also practice similar reproductive behaviors. Scientists believe that the main cues used by the animals are geomagnetic imprinting and olfactory cues. The benefits of returning to the precise location of an animal's birth may be largely associated with its safety and suitability as a breeding ground. When seabirds like the Atlantic puffin return to their natal breeding colony, which are mostly on islands, they are assured of a suitable climate and a sufficient lack of land-based predators.
Sea turtles born in any one area differ genetically from turtles born in other areas. The newly hatched young head out to sea and soon find suitable feeding grounds, and it has been shown that it is to these feeding areas that they return rather than to the actual beach on which they started life. Salmon start their lives in freshwater streams and eventually travel down-river and are washed out to sea. Their ability to travel back, several years later, to the river system in which they were spawned is thought to be linked to olfactory cues, the "taste" of the water. Atlantic bluefin tuna spawn on both the east and west shores of the Atlantic Ocean but intermingle as they feed in mid-ocean. Juvenile tuna that have been tagged have clearly shown that they almost invariably return to the side of the Atlantic on which they were spawned.
Various theories have been put forward as to how the animals find their way home. The geomagnetic imprinting hypothesis holds that they are imprinted with the unique magnetic field that exists in their natal area. This is a plausible theory but has not been proven to occur. Pacific salmon are known to be imprinted on the water chemistry of their home river, a fact that has been confirmed experimentally. They may use geomagnetic information to get close to the coast and then pick up the olfactory cues. Some animals may make navigational errors and end up in the wrong location. If they successfully breed in these new sites, the animal will have widened its breeding base which may ultimately increase the species' chances of survival. Other, unknown means of navigation may be involved, and further research is needed.
Sea turtles
There are several different kinds of marine animals that demonstrate natal homing. The most commonly known is the sea turtle. Loggerhead sea turtles are thought to show two different types of homing, the first of which occurs in the early stages of life. When first heading out to sea, the animals are carried by tides and currents with little swimming involved. Recent studies show that the animals demonstrate homing to feeding grounds near their natal birthplace.
Turtles of a specific natal beach show differences in their mitochondrial DNA haplotypes that distinguish them from turtles of other nesting areas. Many turtles from the same beaches show up at the same feeding areas. Once she reaches sexual maturity in the Atlantic Ocean, the female loggerhead makes the long trip back to her natal beach to lay her eggs. Loggerhead sea turtles in the North Atlantic cover more than 9,000 miles round trip to lay eggs on the North American shore.
Salmon
The migration of North Pacific salmon from the ocean to their freshwater spawning habitat is one of the most extreme migrations in the animal kingdom. The life cycle of a salmon begins in a freshwater stream or river that empties into the ocean. After spending four or five years in the ocean and reaching sexual maturity, many salmon return to the same streams they were born in to spawn. There are several hypotheses on how salmon are able to do this.
One hypothesis is that they use both chemical and geomagnetic cues to return to their birthplace. The Earth's magnetic field may help the fish navigate the ocean to find the spawning region. From there, the animal locates where the river empties into the sea using the chemical cues unique to the fish's natal stream.
Other hypotheses rely on the fact that salmon have an extremely strong sense of smell. One hypothesis states that salmon retain an imprint of the odor of their natal stream as they are migrating downstream. Using this memory of the odor, they are able to return to the same stream years later.
Another smell-related hypothesis states that the young salmon release a pheromone as they migrate downstream, and are able to return the same stream years later by smelling the pheromone they released.
Bluefin tuna
Atlantic bluefin tuna spawn on both the east and west shores of the Atlantic Ocean. When a bluefin tuna hatches, there is a chemical imprint in the animal's otoliths based on the water's chemical properties. Fish born in different regions will show clear differences here. Studies of the commercial fishing industry in the United States show that the population of bluefin tuna in the North Atlantic is made up of fish hailing from both coasts. While the fish may live in close proximity out in the Atlantic, they return to their natal region to spawn. Electronic tagging done over several years showed that 95.8 percent of the yearlings tagged in the Mediterranean Sea returned there to spawn. Results for the Gulf of Mexico were 99.3 percent. With the overfishing of this species, scientists have much to learn about their spawning habits in order to sustain the population for both a reliable food source and a healthy ecosystem.
Atlantic puffins
Atlantic puffins spend the winter at sea and then return to the places of their birth, as has been shown by ringing birds. The breeding sites are usually inhospitable clifftops and uninhabited islands. Birds that were removed as chicks and released elsewhere were found to show fidelity to their point of liberation rather than to their birthplace.
Navigational tools
Geomagnetic imprinting
One idea about how animals accomplish natal homing is that they imprint on the unique magnetic field that exists in their natal area and then use this information to return years later. This idea is known as the "geomagnetic imprinting hypothesis". The concept was developed in a 2008 paper that sought to explain how sea turtles and salmon can return to their home areas after migrating hundreds or thousands of kilometers away.
In animal behavior, the term "imprinting" refers to a special type of learning. Exact definitions of imprinting vary, but important aspects of the process include the following: (1) the learning occurs during a particular, critical period, usually early in the life of the animal; (2) the effects last a long time; and (3) the effects cannot be easily modified. For natal homing, the concept is that animals like sea turtles and salmon imprint on the magnetic field of their home area when young, and then use this information to return years later.
Geomagnetic imprinting has not been proven to occur, but it appears to be plausible for several reasons. The earth's magnetic field varies across the globe in such a way that different geographic areas have different magnetic fields associated with them. Also, sea turtles have a well-developed magnetic sense and can detect both the intensity (strength) of the Earth's field as well as the inclination angle (angle at which the field lines intersect the earth's surface). Thus, it is plausible that sea turtles, and maybe salmon also, can recognize their home areas using the distinctive magnetic fields that exist there.
Chemical cues and olfactory imprinting
Pacific salmon are known to imprint on the chemical signature of their home river. This information helps salmon find their home river once they reach the coast from the open sea. In most cases, chemical cues from rivers are not thought to extend very far out into the ocean. Thus, salmon probably use two different navigational systems in sequence when they migrate from the open sea to their spawning grounds. The first one, possibly based on the earth's magnetic field (see Geomagnetic Imprinting above), is used in the open ocean and probably brings salmon close to their home river. Once they are close to the home river, salmon can use olfactory (chemical) signals to find their spawning area.
Many of the classical studies demonstrating olfactory imprinting in salmon were carried out by Arthur Hasler and his colleagues. In one particularly famous experiment, young salmon were imprinted with artificial chemicals and were released into the wild to perform their normal migrations. Almost all of the young fish returned to the same stream that had also been artificially imprinted with the same chemicals, proving that the fish do use chemical cues to return to their natal region.
Effect of thermal pollution on natal homing (chum salmon)
Thermal pollution, which refers to the degradation of water quality by changing the ambient water temperature, has a serious effect on the natal homing of chum salmon. The chum salmon is a typical cold-water fish. When the water temperature is raised by thermal pollution, chum salmon tend to dive into deep water for thermoregulation. This reduces the time chum salmon spend in the surface water column and reduces their chance of approaching the natal river, since the chemical cue for natal homing is concentrated in surface water.
Evolution
Scientists have recorded that at a beach in eastern Mexico where Kemp's ridley turtles nest, navigational error from the inclination angle over a period of one decade would lead the turtles, on average, only a short distance from their natal region. Other locations resulted in navigational errors of over one hundred kilometers over the same period. Results from this study suggest that geomagnetic imprinting only navigates the marine animals close to where they were born; the animals then rely on chemical cues from tributaries and rivers to direct them back to their birthplace.
These navigational errors have actually strengthened the evolutionary trait of natal homing for marine animals by resulting in some animals straying from their birthplace. Most animals return to their natal region because they know it is a safe place to lay their eggs. These regions will usually have few predators, the correct temperature and climate, and will have the right type of sand for turtles because they cannot lay eggs in wet and muddy environments.
The few animals that do not return to their natal region and stray to other places to reproduce will provide the species with a variety of different locations of reproduction, so if the original natal locations have changed, the species will have expanded to more places and will ultimately increase the species' survival chances.
Future research
Although scientists have been studying marine animals that perform natal homing for years, they are still not certain that geomagnetic imprinting and chemical cues are the only navigational tools used for these remarkable migrations. Much research remains before scientists can fully understand how these animals travel such great distances to reproduce. Fortunately, as technology has progressed, several tools have become available to scientists, such as data loggers equipped with magnetometers that can easily be attached to the animals. Not only do these give data on the animal's position relative to the Earth's magnetic field, but some also give latitude based on this, longitude based on light levels, temperature, depth, etc. Pop-up satellite archival tags are used to gather data and can transfer it via Argos System satellites to the scientist.
See also
Philopatry
Salmon run
Notes
References
Ethology | Natal homing | Biology | 2,332 |
474,372 | https://en.wikipedia.org/wiki/Two-dimensional%20gel%20electrophoresis | Two-dimensional gel electrophoresis, abbreviated as 2-DE or 2-D electrophoresis, is a form of gel electrophoresis commonly used to analyze proteins. Mixtures of proteins are separated by two properties in two dimensions on 2D gels. 2-DE was first independently introduced by O'Farrell and Klose in 1975.
Basis for separation
2-D electrophoresis begins with electrophoresis in the first dimension and then separates the molecules perpendicularly from the first to create an electropherogram in the second dimension. In electrophoresis in the first dimension, molecules are separated linearly according to their isoelectric point. In the second dimension, the molecules are then separated at 90 degrees from the first electropherogram according to molecular mass. Since it is unlikely that two molecules will be similar in two distinct properties, molecules are more effectively separated in 2-D electrophoresis than in 1-D electrophoresis.
The two dimensions that proteins are separated into using this technique can be isoelectric point, protein complex mass in the native state, or protein mass.
The separation by isoelectric point is called isoelectric focusing. In this method, a pH gradient is applied to a gel and an electric potential is applied across the gel, making one end more positive than the other. At all pH values other than their isoelectric point, proteins will be charged. If they are positively charged, they will be pulled towards the more negative end of the gel, and if they are negatively charged, they will be pulled to the more positive end of the gel. The proteins applied in the first dimension will move along the gel and will accumulate at their isoelectric point; that is, the point at which the overall charge on the protein is 0 (a neutral charge).
Separation by protein complex mass is done via native PAGE, in which proteins remain in their native state and are separated in the electric field according to their mass and the mass of their complexes, respectively. To obtain a separation by size and not by net charge, as in IEF, an additional charge is transferred to the proteins by the use of Coomassie brilliant blue or lithium dodecyl sulfate. Knowledge of protein complexes is important for the analysis of the functioning of proteins in a cell, as proteins mostly act together in complexes to be fully functional. The analysis of this sub-organelle organisation of the cell requires techniques that conserve the native state of the protein complexes.
Separation purely by mass is commonly achieved using SDS-PAGE. SDS denatures the proteins, breaks apart most complexes, and approximately equalizes the mass-to-charge ratios. SDS-PAGE must be done as the second, perpendicular dimension, as it breaks apart complexes (rendering native PAGE impossible) and equalizes mass-to-charge ratios (rendering IEF impossible).
Detecting proteins
The result of this is a gel with proteins spread out on its surface. These proteins can then be detected by a variety of means, but the most commonly used stains are silver and Coomassie brilliant blue staining. In the former case, a silver colloid is applied to the gel. The silver binds to cysteine groups within the protein. The silver is darkened by exposure to ultra-violet light. The amount of silver can be related to the darkness, and therefore the amount of protein at a given location on the gel. This measurement can only give approximate amounts, but is adequate for most purposes. Silver staining is 100x more sensitive than Coomassie brilliant blue with a 40-fold range of linearity.
Molecules other than proteins can be separated by 2D electrophoresis. In supercoiling assays, coiled DNA is separated in the first dimension and denatured by a DNA intercalator (such as ethidium bromide or the less carcinogenic chloroquine) in the second. This is comparable to the combination of native PAGE/SDS-PAGE in protein separation.
Common techniques
IPG-DALT
A common technique is to use an immobilized pH gradient (IPG) in the first dimension. This technique is referred to as IPG-DALT. The sample is first separated on an IPG gel (which is commercially available), then the gel is cut into slices for each sample, which are equilibrated in SDS-mercaptoethanol and applied to an SDS-PAGE gel for resolution in the second dimension. Typically, IPG-DALT is not used for quantification of proteins due to the loss of low-molecular-weight components during the transfer to the SDS-PAGE gel.
IEF SDS-PAGE
See Isoelectric focusing
2D gel analysis software
In quantitative proteomics, these tools primarily analyze biomarkers by quantifying individual proteins and showing the separation between one or more protein "spots" on a scanned image of a 2-DE gel. Additionally, these tools match spots between gels of similar samples to show, for example, proteomic differences between early and advanced stages of an illness. Software packages include Delta2D (discontinued), ImageMaster (discontinued), Melanie, PDQuest (discontinued), SameSpots and REDFIN, among others. While this technology is widely utilized, the automated analysis has not been perfected. For example, while PDQuest and SameSpots tend to agree on the quantification and analysis of well-defined, well-separated protein spots, they deliver different results and analysis tendencies with less-defined, less-separated spots. Comparative studies have been published to guide researchers on the "best" software for their analysis. Although typically used for standard gel electrophoresis, Sciugo can also be used for figure creation and quantification.
Challenges for automatic software-based analysis include: incompletely separated (overlapping) spots; weak spots and noise (e.g., "ghost spots"); running differences between gels (e.g., a protein migrating to different positions on different gels); unmatched or undetected spots, leading to missing values; mismatched spots; errors in quantification (several distinct spots may be erroneously detected as a single spot by the software, and parts of a spot may be excluded from quantification); and differences in software algorithms and therefore analysis tendencies.
Generated picking lists can be exported from some software packages and used for the automated in-gel digestion of protein spots, and subsequent identification of the proteins by mass spectrometry. Mass spectrometry analysis can provide precise mass measurements along with sequencing of peptides that range from 1000 to 4000 atomic mass units.
For an overview of the current approach for software analysis of 2DE gel images, see Berth et al. or Bandow et al.
See also
Difference gel electrophoresis
PROTOMAP
References
External links
SameSpots 2-D Proteomics Analysis Software.
JVirGel Create virtual 2-D Gels from sequence data.
Gel IQ A freely downloadable software tool for assessing the quality of 2D gel image analysis data.
2-D Electrophoresis Principles & Methods Handbook
Laboratory techniques
Electrophoresis | Two-dimensional gel electrophoresis | Chemistry,Biology | 1,484
11,009,758 | https://en.wikipedia.org/wiki/Realizability | In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them. Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another.
Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide.
Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi.
Example: Kleene's 1945-realizability
Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair (n,m) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φn is the computable function with index n. The following clauses are used to define a relation "n realizes A" between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation:
Any number n realizes an atomic formula s=t if and only if s=t is true. Thus every number realizes a true equation, and no number realizes a false equation.
A pair (n,m) realizes a formula A∧B if and only if n realizes A and m realizes B. Thus a realizer for a conjunction is a pair of realizers for the conjuncts.
A pair (n,m) realizes a formula A∨B if and only if the following hold: n is 0 or 1; and if n is 0 then m realizes A; and if n is 1 then m realizes B. Thus a realizer for a disjunction explicitly picks one of the disjuncts (with n) and provides a realizer for it (with m).
A number n realizes a formula A→B if and only if, for every m that realizes A, φn(m) realizes B. Thus a realizer for an implication corresponds to a computable function that takes any realizer for the hypothesis and produces a realizer for the conclusion.
A pair (n,m) realizes a formula (∃ x)A(x) if and only if m is a realizer for A(n). Thus a realizer for an existential formula produces an explicit witness for the quantifier along with a realizer for the formula instantiated with that witness.
A number n realizes a formula (∀ x)A(x) if and only if, for all m, φn(m) is defined and realizes A(m). Thus a realizer for a universal statement is a computable function that produces, for each m, a realizer for the formula instantiated with m.
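For reference, the six clauses can be stated compactly in standard notation (the symbol ⊩, read "realizes", is introduced here only for brevity and is not used elsewhere in this article):

\begin{align*}
n \Vdash (s = t) &\iff s = t \text{ is true}\\
(n,m) \Vdash A \land B &\iff n \Vdash A \text{ and } m \Vdash B\\
(n,m) \Vdash A \lor B &\iff (n = 0 \text{ and } m \Vdash A) \text{ or } (n = 1 \text{ and } m \Vdash B)\\
n \Vdash A \to B &\iff \varphi_n(m) \Vdash B \text{ for every } m \Vdash A\\
(n,m) \Vdash (\exists x)A(x) &\iff m \Vdash A(n)\\
n \Vdash (\forall x)A(x) &\iff \varphi_n(m) \text{ is defined and } \varphi_n(m) \Vdash A(m) \text{ for all } m
\end{align*}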
With this definition, the following theorem is obtained:
Let A be a sentence of Heyting arithmetic (HA). If HA proves A then there is an n such that n realizes A.
On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose. So realizability does not exactly mirror intuitionistic reasoning.
Further analysis of the method can be used to prove that HA has the "disjunction and existence properties":
If HA proves a sentence (∃ x)A(x), then there is an n such that HA proves A(n)
If HA proves a sentence A∨B, then HA proves A or HA proves B.
More such properties are obtained involving Harrop formulas.
Later developments
Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. In contrast, it allows one to constructively justify the principle of independence of premise:
(¬A → (∃ x)B(x)) → (∃ x)(¬A → B(x)).
Relative realizability is an intuitionist analysis of computable or computably enumerable elements of data structures that are not themselves necessarily computable, such as computable operations on all real numbers, when reals can only be approximated on digital computer systems.
Classical realizability was introduced by Krivine and extends realizability to classical logic. It furthermore realizes axioms of Zermelo–Fraenkel set theory. Understood as a generalization of Cohen’s forcing, it was used to provide new models of set theory.
Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller to encompass several constructions, such as geometry of interaction models, ludics, and interaction graph models.
Use in proof mining
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq.
See also
Curry–Howard correspondence
Dialectica interpretation
Harrop formula
Notes
References
Kreisel G. (1959). "Interpretation of Analysis by Means of Constructive Functionals of Finite Types", in: Constructivity in Mathematics, edited by A. Heyting, North-Holland, pp. 101–128.
Kleene, S. C. (1973). "Realizability: a retrospective survey", pp. 95–112.
External links
Realizability Collection of links to recent papers on realizability and related topics.
Proof theory
Constructivism (mathematics) | Realizability | Mathematics | 1,317 |
15,277,793 | https://en.wikipedia.org/wiki/Job%20control%20%28computing%29 | In computing, job control refers to the control of multiple tasks or jobs on a computer system, ensuring that they each have access to adequate resources to perform correctly, that competition for limited resources does not cause a deadlock where two or more jobs are unable to complete, resolving such situations where they do occur, and terminating jobs that, for any reason, are not performing as expected.
Job control has developed from the early days of computers where human operators were responsible for setting up, monitoring and controlling every job, to modern operating systems, which take on the bulk of the work of job control.
Even with a highly sophisticated scheduling system, some human intervention is desirable. Modern systems permit their users to stop and resume jobs, to execute them in the foreground (with the ability to interact with the user) or in the background. Unix-like systems follow this pattern.
History
It became obvious to the early computer developers that their fast machines spent most of the time idle because the single program they were executing had to wait while a slow peripheral device completed an essential operation such as reading or writing data; in modern terms, programs were I/O-bound, not compute-bound. Buffering only provided a partial solution; eventually an output buffer would occupy all available memory or an input buffer would be emptied by the program, and the system would be forced to wait for a relatively slow device to complete an operation.
A more general solution is multitasking. More than one running program, or process, is present in the computer at any given time. If a process is unable to continue, its context can be stored and the computer can start or resume the execution of another process. At first quite unsophisticated and relying on special programming techniques, multitasking soon became automated, and was usually performed by a special process called the scheduler, which has the ability to interrupt and resume the execution of other processes. Typically a driver for a peripheral device suspends execution of the current process if the device is unable to complete an operation immediately, and the scheduler places the process on its queue of sleeping jobs. When the peripheral completes the operation, the process is re-awakened.
However, this low-level scheduling has its drawbacks. A process that seldom needs to interact with peripherals or other processes would simply hog processor resources until it completed or was halted by manual intervention. The result, particularly for interactive systems running tasks that frequently interact with the outside world, is that the system becomes sluggish and slow to react. This problem is resolved by allocating a "timeslice" to each process, a period of uninterrupted execution after which the scheduler automatically puts it on the sleep queue. Processes can be given different priorities, and the scheduler can then allocate varying shares of available execution time to each process on the basis of the assigned priorities, as sketched below.
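A minimal sketch of the timeslice idea in Python (hypothetical and illustrative, not any particular operating system's scheduler): each process runs for a quantum scaled by its priority, then rejoins the back of the run queue until its work is done.

from collections import deque

def schedule(jobs, timeslice=10):
    """Run jobs to completion; jobs are (name, remaining_time, priority)."""
    queue = deque(jobs)
    while queue:
        name, remaining, priority = queue.popleft()
        quantum = timeslice * priority          # higher priority, bigger share
        run = min(quantum, remaining)
        print(f"{name}: ran {run} units")
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining, priority))  # pre-empted: requeue
        else:
            print(f"{name}: finished")

# Illustrative workload: an interactive job given double priority.
schedule([("editor", 30, 2), ("compile", 50, 1), ("backup", 20, 1)])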
This system of pre-emptive multitasking forms the basis of most modern job control systems.
Batch processing
While batch processing can run around the clock, with or without computer operators, since the computer is much faster than a person, most decision-making occurs before the job even begins to run, and requires planning by the "programmer."
Batch-oriented features
Although a computer operator may be present, batch processing is intended to mostly operate without human intervention. Therefore, many details must be included in the submitted instructions:
which programs to run;
which files and/or devices to use for input/output;
under which conditions to skip a step.
Job control languages
Batch
Early computer resident monitors and operating systems were relatively primitive and were not capable of sophisticated resource allocation. Typically such allocation decisions were made by the computer operator or the user who submitted a job. Batch processing was common, and interactive computer systems rare and expensive. Job control languages developed as primitive instructions, typically punched on cards at the head of a deck containing input data, requesting resources such as memory allocation, serial numbers or names of magnetic tape spools to be made available during execution, or assignment of filenames or devices to device numbers referenced by the job. A typical example of this kind of language, still in use on mainframes, is IBM's Job Control Language (also known as JCL). Though the format of early JCLs was intended for punched card use, the format survived the transition to storage in computer files on disk.
BANG and other non-IBM JCLs
Non-IBM mainframe batch systems had some form of job control language, whether called that or not; their syntax was completely different from IBM versions, but they usually provided similar capabilities. Interactive systems include "command languages": command files (such as PC DOS ".bat" files) can be run non-interactively, but these usually do not provide as robust an environment for running unattended jobs as JCL. On some computer systems the job control language and the interactive command language may be different. For example, TSO on z/OS systems uses CLIST or Rexx as command languages along with JCL for batch work. On other systems these may be the same.
The non-IBM JCLs of the mainframe vendors once known as the BUNCH (Burroughs, Univac/Unisys, NCR, Control Data, Honeywell), several of which prefixed control statements with an exclamation point (a "bang"), have, except for Unisys's, largely fallen silent.
Interactive
As time sharing systems developed, interactive job control emerged. An end-user in a time sharing system could submit a job interactively from his remote terminal (remote job entry), communicate with the operators to warn them of special requirements, and query the system as to its progress. He could assign a priority to the job, and terminate (kill) it if desired. He could also, naturally, run a job in the foreground, where he would be able to communicate directly with the executing program. During interactive execution he could interrupt the job and let it continue in the background or kill it. This development of interactive computing in a multitasking environment led to the development of the modern shell.
File systems and device independence
The ability to not have to specify part or all of the information about a file or device to be used by a given program is called device independence.
Real-time computing
Pre-emptive multitasking with job control assures that a system operates in a timely manner most of the time. In some environments (for instance, operating expensive or dangerous machinery), a strong design constraint of the system is the delivery of timely results in all circumstances. In such circumstances, job control is more complex and the role of scheduling is more important.
Since real-time systems do event-driven scheduling for all real-time operations, "the sequence of these real-time operations is not under the immediate control of a computer operator or programmer."
However, a system may have the ability to interleave real-time and other, less time-critical tasks, where the dividing line might for example be response required within one tenth of a second. In the case of the Xerox RBM (Real-time/Batch Monitor) systems, for example, two other capabilities existed:
computer operator commands ("unsolicited key-in");
background job streams (batch jobs).
External links
Job Control Basics
See also
Command language
Job Control Language
Job control (Unix)
References
Computing terminology | Job control (computing) | Technology | 1,534 |
30,961,716 | https://en.wikipedia.org/wiki/Hemoglobinometer | A hemoglobinometer or haemoglobinometer (British English) is a medical device used to measure hemoglobin concentration in blood. It can operate by spectrophotometric measurement of hemoglobin concentration. Portable hemoglobinometers provide easy and convenient measurement of hematological variables, especially in areas where clinic laboratories are unavailable.
Per the guidelines of the National AIDS Control Organisation (NACO) for accurate results and mass screening, analysis using a hemoglobinometer is a recommended method for absorbance measurement of whole blood at the Hb/HbO2 isosbestic point, based on microcuvette technology such as the HemoCue 301 and Mokshit-Chanda-AM005A.
Devices
See also
Hemocytometer
Cytometry
Glucose meter
Blood chemistry
References
Physiological instruments
Hematology | Hemoglobinometer | Technology,Engineering | 175 |
54,286,982 | https://en.wikipedia.org/wiki/Aspergillus%20pisci | Aspergillus pisci is a species of fungus in the genus Aspergillus.
References
pisci
Fungi described in 2014
Fungus species | Aspergillus pisci | Biology | 32 |
2,463,425 | https://en.wikipedia.org/wiki/Halloysite | Halloysite is an aluminosilicate clay mineral with the empirical formula Al2Si2O5(OH)4. Its main constituents are oxygen (55.78%), silicon (21.76%), aluminium (20.90%), and hydrogen (1.56%). It is a member of the kaolinite group. Halloysite typically forms by hydrothermal alteration of alumino-silicate minerals. It can occur intermixed with dickite, kaolinite, montmorillonite and other clay minerals. X-ray diffraction studies are required for positive identification. It was first described in 1826, and subsequently named after, the Belgian geologist Omalius d'Halloy.
Structure
Halloysite naturally occurs as small cylinders (nanotubes) that have a wall thickness of 10–15 atomic aluminosilicate sheets, an outer diameter of 50–60 nm, an inner diameter of 12–15 nm, and a length of 0.5–10 μm. Their outer surface is mostly composed of SiO2 and the inner surface of Al2O3, and hence those surfaces are oppositely charged. Two common forms are found. When hydrated, the clay exhibits a 1 nm spacing of the layers, and when dehydrated (meta-halloysite), the spacing is 0.7 nm. The cation exchange capacity depends on the amount of hydration, as 2H2O has 5–10 meq/100 g, while 4H2O has 40–50 meq/100 g. Endellite is the alternative name for the Al2Si2O5(OH)4·2(H2O) structure.
Owing to the layered structure of the halloysite, it has a large specific surface area, which can reach 117 m2/g.
Formation
The formation of halloysite is due to hydrothermal alteration, and it is often found near carbonate rocks. For example, halloysite samples found in Wagon Wheel Gap, Colorado, United States, are suspected to be the weathering product of rhyolite by downward-moving waters. In general, the formation of clay minerals is highly favoured in tropical and sub-tropical climates due to the immense amounts of water flow. Halloysite has also been found overlying basaltic rock, showing no gradual changes from rock to mineral formation. Halloysite occurs primarily in recently exposed volcanic-derived soils, but it also forms from primary minerals in tropical soils or pre-glacially weathered materials. Igneous rocks, especially glassy basaltic rocks, are more susceptible to weathering and alteration, forming halloysite.
Often, as is the case with halloysite found in Juab County, Utah, United States, the clay is found in close association with goethite and limonite and is often interspersed with alunite. Feldspars are also subject to decomposition by water saturated with carbon dioxide. When feldspar occurs near the surface of lava flows, the CO2 concentration is high, and reaction rates are rapid. With increasing depth, the leaching solutions become saturated with silica, aluminium, sodium, and calcium. Once the solutions are depleted of CO2, the dissolved species precipitate as secondary minerals. The decomposition is dependent on the flow of water. When halloysite is formed from plagioclase, it does not pass through intermediate stages.
Locations
A highly refined halloysite is mined, then processed, from a rhyolite occurrence in Matauri Bay, New Zealand. Output of this mine is up to 20,000 tonnes per annum.
One of the largest halloysite deposits in the world is Dunino, near Legnica in Poland. It has reserves estimated at 10 million tons of material. This halloysite is characterized by layered-tubular and platy structure.
The Dragon Mine deposit, located in the Tintic district, Eureka, Utah, US, contains catalytic-quality halloysite and is one of the largest in the United States. Total production from 1931 to 1962 amounted to nearly 750,000 metric tons of extracted halloysite. Pure halloysite of both the 10 Å and 7 Å forms is present.
Applications
Commercial
Uses of the halloysite produced at the Matauri Bay deposit in New Zealand include porcelain and bone china by manufacturers in various countries, particularly in Asia.
Laboratory studies
Halloysite is an efficient adsorbent for both cations and anions. It has also been used as a petroleum cracking catalyst; Exxon developed a cracking catalyst based on synthetic halloysite in the 1970s. Owing to its structure, halloysite can be used as a filler, in either natural or modified forms, in nanocomposites. Halloysite nanotubes can be intercalated with catalytic metal nanoparticles made of silver, ruthenium, rhodium, platinum or cobalt, thereby serving as a catalyst support.
Halloysite has been evaluated for use in the sorption of CO2 and CH4.
Due to its nanostructure, halloysite is used as the main nanostructured filler in multifunctional mixed matrix membranes (MMMs), opening up new possibilities in the separation of gaseous and liquid mixtures and water purification.
Besides supporting nanoparticles, halloysite nanotubes can also be used as a template to produce round well-dispersed nanoparticles (NPs). For example, bismuth and bismuth subcarbonate NPs with controlled size (~7 nm) were synthesized in water. Importantly, when halloysite was not used, large nanoplates instead of round spheres are obtained.
Halloysite is also used to purify water; for example, two azo dyes were removed from aqueous solutions by adsorption on a Polish halloysite from the Dunino deposit.
Halloysite has many advantages and has been reported to serve as a nanocontainer.
Halloysite can also be used to produce porous silicon nanotubes as anode materials for Li-ion batteries through the selective etching of aluminium oxide and thermal reduction.
It is also used as a nanofiller in nanocomposites, e.g. thermoplastic polyurethane, affecting their mechanical, physicochemical, and biological properties.
Chemistry and mineralogy
Typical chemical and mineralogical analyses of two commercial grades of halloysite are:
References
Phyllosilicates
Aluminium minerals
Clay minerals group
Monoclinic minerals
Minerals in space group 9
Luminescent minerals
Volcanic soils | Halloysite | Chemistry | 1,353 |
62,210,354 | https://en.wikipedia.org/wiki/IoBT-CRA | The Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), also known as the Internet of Battlefield Things Research on Evolving Intelligent Goal-driven Networks (IoBT REIGN), is a collaborative research alliance between government, industry, and university researchers for the purposes of developing a fundamental understanding of a dynamic, goal-driven Internet of Military Things (IoMT) known as the Internet of Battlefield Things (IoBT). It was first established by the U.S. Army Research Laboratory (ARL) to investigate the use of machine intelligence and smart technology on the battlefield, as well as strengthen the collaboration between autonomous agents and human soldiers in combat. An initial grant of $25 million was provided by ARL in October 2017 to fund the first five years of this potential 10-year research program.
The research effort is a collaboration between ARL and Carnegie Mellon University, the University of California, Berkeley, the University of California, Los Angeles, the University of Massachusetts, the University of Southern California, Georgetown University, and SRI International with the University of Illinois at Urbana-Champaign (UIUC) acting as the consortium lead.
Goals
The IoBT-CRA was created as part of the U.S. Army’s long-term plans to keep up with technological advances in commercial industry and better prepare for future electronic warfare against more technologically sophisticated adversaries. In light of this objective, the IoBT-CRA focuses on exploring the capabilities of intelligent battlefield systems and large-scale heterogeneous sensor networks that dynamically evolve in real-time in order to adapt to Army mission needs. Part of the CRA research is dedicated to enhancing modern intelligent sensor and actuator capacity, allowing them to be compatible with secure military-owned networks, less trustworthy civilian networks, and adversarial networks.
ARL identified six areas of research that the IoBT-CRA should strive to develop as part of its program:
Agile Synthesis: Theoretical models and methods of autonomic complex systems that provide the capacity to enable fast and effective command over military, adversary, and civilian networks.
Reflexes: Theoretical models and methods for structuring dynamic IoBTs that perform adaptive, autonomic, and self-aware behavior at varying ranges of scale, distribution, resource constraints, and heterogeneity.
Intelligent Battlefield Services: Scientific theories that will help improve the fundamental run-time capabilities of IoBTs with tasks such as information collection, predictive processing, and data anomaly detection.
Security: Methods of increasing the defenses of IoBTs such that the system is resilient to attacks and tampering from adversaries and is able to continue operating under less-than-ideal situations.
Dependability: Fundamental models related to asset composition, system adaptation, and intelligent services that are all aimed to increase the reliability of IoBTs in largely uncertain environments.
Experimentation: The architectural foundations for the IoBT seek to evaluate how well theories, algorithms, and technologies perform under various military-related scenarios for the purpose of addressing issues regarding scale, composability, and compatibility.
External links
IoBT REIGN Homepage
List of IoBT-CRA publications
U.S. Army Research Laboratory IoBT webpage
References
Military technology
Internet of things
Digital technology | IoBT-CRA | Technology | 662 |
57,244,772 | https://en.wikipedia.org/wiki/Shuntaro%20Furukawa | is a Japanese businessman and executive. He is the sixth and current president of the video game company Nintendo in Japan. He took over as company president in June 2018, succeeding Tatsumi Kimishima.
Early life
Furukawa was born in Tokyo, Japan, on January 10, 1972, the son of illustrator Taku Furukawa. He grew up playing games on Nintendo's Famicom console. Furukawa attended Kunitachi Senior High School and graduated from Waseda University's School of Political Science and Economics in 1994.
Career
In 1994, he joined Nintendo and worked as an accountant at the European headquarters for a decade. By the mid-2010s, he had risen through the corporate office, working in global marketing and the executive department. He also became an outside director of the partly owned The Pokémon Company, serving as the Nintendo representative on the company's board of directors, as Nintendo owns a 32% share in the joint venture.
In 2015, he was promoted to the General Manager of Corporate Planning Department. In June 2016, with some company restructuring he joined the Nintendo Board of Directors as the Managing Executive Officer of the Corporate Analysis & Administration Division. Furukawa is fluent in English, and was involved in the development and release of the Nintendo Switch. On June 28, 2018, he succeeded Tatsumi Kimishima to become the sixth company president in Nintendo's history.
References
Living people
1972 births
Japanese chairpersons of corporations
Japanese chief executives
Japanese video game businesspeople
Nintendo people
Businesspeople from Tokyo
Waseda University alumni
21st-century Japanese businesspeople | Shuntaro Furukawa | Technology | 323 |
3,595,454 | https://en.wikipedia.org/wiki/Vaccine%20Safety%20Datalink | The Vaccine Safety Datalink Project (VSD) was established in 1990 by the United States Centers for Disease Control and Prevention (CDC) to study the adverse effects of vaccines.
Four large health maintenance organizations, including Kaiser Permanente, were initially recruited to provide the CDC with medical data on vaccination histories, health outcomes, and subject characteristics. The VSD database contains data compiled from surveillance on more than seven million people in the United States, including about 500,000 children from birth through age six years (2% of the U.S. population in this age group).
The VSD data-sharing program is now being administered by the National Center for Health Statistics Research Data Center. The data sharing guidelines have been revised to include comments from interested groups as well as recommendations from the Institute of Medicine (IOM).
The Vaccine Adverse Event Reporting System (VAERS), the VSD, and the Clinical Immunization Safety Assessment (CISA) Network are tools by which the CDC and FDA measure vaccine safety to fulfill their duty as regulatory agencies charged with protecting the public. Data from the VSD Project have been used to address a number of vaccine safety concerns; examples include a study clarifying the risk of anaphylaxis after vaccine administration and several studies examining the rejected hypothesis of a link between thimerosal-containing vaccines and autism.
Participating healthcare organizations
The following organizations are members of the project:
Kaiser Permanente Washington, Seattle, Washington
Harvard Pilgrim Health Care, Boston, Massachusetts
HealthPartners Institute, Bloomington, Minnesota
Kaiser Permanente Northwest, Portland, Oregon
Kaiser Permanente Northern California, Oakland, California
Kaiser Permanente Colorado, Denver, Colorado
Denver Health Medical Center, Denver, Colorado
Marshfield Clinic Research Institute, Marshfield, Wisconsin
Kaiser Permanente Southern California, Los Angeles, California
Kaiser Permanente Mid-Atlantic States, Rockville, Maryland
Acumen, Burlingame, California
Indiana University, Indianapolis, Indiana
OCHIN, Portland, Oregon
Notes
External links
NationalAcademies.org – 'Independent Oversight of Vaccine Safety Data Program Needed To Ensure Greater Transparency and Enhance Public Trust', National Academies (February 17, 2005)
WHO.int (pdf) – 'The Vaccine Safety Datalink: immunization research in health maintenance organizations in the USA', R.T. Chen, F. DeStefano, R.L. Davis, L.A. Jackson, R.S. Thompson, J.P. Mullooly, S.B. Black, H.R. Shinefield, C.M. Vadheim, J.I. Ward, S.M. Marcy & the Vaccine Safety Datalink Team, World Health Organization
Vaccination-related organizations
Drug safety
Vaccination in the United States
Centers for Disease Control and Prevention | Vaccine Safety Datalink | Chemistry | 571 |
17,545,919 | https://en.wikipedia.org/wiki/Institute%20of%20Combinatorics%20and%20its%20Applications | The Institute of Combinatorics and its Applications (ICA) is an international scientific organization formed in 1990 to increase the visibility and influence of the combinatorial community. In pursuit of this goal, the ICA sponsors conferences, publishes a bulletin and awards a number of medals, including the Euler, Hall, Kirkman, and Stanton Medals. It is based in Duluth, Minnesota and its operation office is housed at University of Minnesota Duluth.
The institute was minimally active between 2010 and 2016 and resumed its full activities in March 2016.
Membership
The ICA has over 800 members in over forty countries. Membership is at three levels. Members are those who have not yet completed a Ph.D. Associate Fellows are younger members who have received the Ph.D. or have published extensively; normally an Associate Fellow should hold the rank of assistant professor. Fellows are expected to be established scholars and typically have the rank of associate professor or higher.
Some members are involved in highly theoretical research; there are members whose primary interest lies in education and instruction; and there are members who are heavily involved in the applications of combinatorics in statistical design, communications theory, cryptography, computer security, and other practical areas.
Although being a fellow of the ICA is not itself a highly selective honor, the ICA also maintains another class of members, "honorary fellows", people who have made "pre-eminent contributions to combinatorics or its applications". The number of living honorary fellows is limited to ten at any time. The deceased honorary fellows include
H. S. M. Coxeter, Paul Erdős, Haim Hanani, Bernhard Neumann, D. H. Lehmer,
Leonard Carlitz, Robert Frucht, E. M. Wright, and Horst Sachs.
Living honorary fellows include
S. S. Shrikhande, C. R. Rao, G. J. Simmons, Vera Sós, Henry Gould, Carsten Thomassen, Neil Robertson, Cheryl Praeger, and R. M. Wilson.
Publication
The ICA publishes the Bulletin of the ICA, a journal that combines publication of survey and research papers with news of members and accounts of future and past conferences. It appears three times a year, in January, May and September, and usually consists of 128 pages.
Beginning in 2017, the research articles in the Bulletin have been made available on an open access basis.
Medals
The ICA awards the Euler Medals annually for distinguished career contributions to combinatorics by a member of the institute who is still active in research. It is named after the 18th century mathematician Leonhard Euler.
The ICA awards the Hall Medals, named after Marshall Hall, Jr., to recognize outstanding achievements by members who are not over age 40.
The ICA awards the Kirkman Medals, named after Thomas Kirkman, to recognize outstanding achievements by members who are within four years past their Ph.D.
The winners of the medals for the years between 2010 and 2015 were decided by the ICA Medals Committee between November 2016 and February 2017 after the ICA resumed its activities in 2016.
In 2016, the ICA voted to institute an ICA medal to be known as the Stanton Medal, named after Ralph Stanton, in recognition of substantial and sustained contributions, other than research, to promoting the discipline of combinatorics. The Stanton Medal honours significant lifetime contributions to promoting the discipline of combinatorics through advocacy, outreach, service, teaching and/or mentoring. At most one medal per year is to be awarded, typically to a Fellow of the ICA.
List of Euler Medal winners
List of Hall Medal winners
List of Kirkman Medal winners
List of Stanton Medal winners
References
External links
Official Website
Mathematical societies
Organizations established in 1990
Organizations based in Winnipeg
Mathematics awards | Institute of Combinatorics and its Applications | Technology | 779 |
21,462,430 | https://en.wikipedia.org/wiki/Hydraulic%20splitter | A hydraulic splitter, also known as rock splitter or darda splitter, is a type of portable hydraulic tool. It is used in demolition jobs which involve breaking large blocks of concrete or rocks. Its use in geology was first popularized by volcanologist David Richardson.
Following the darda splitters, a second type of hydraulic splitter, known as the piston splitter, began to be used at large rock demolition sites such as tunneling sites or building foundation sites. This type produces much stronger splitting forces than darda splitters. The piston splitter requires larger hole diameters (usually 90 mm, 95 mm or 105 mm, and rarely 150 mm or 200 mm) than the darda splitter, which requires holes usually under 50 mm. The cylinders of piston splitters are 10–15 mm smaller in diameter than the holes. Hwacheon HRD-tech introduced this piston splitter in late 1990 for industrial application and refined its design in Korea. Many other manufacturers began to produce them as demand rose.
The darda splitters have been manufactured by the German company Darda and by many other manufacturers. Large darda splitters mounted on excavators are manufactured by Yamamoto Rock international and Splitstone.
Splitstone manufactures both portable and larger splitters.
The darda splitters consist of two wedges which are inserted in a pre-drilled hole and a hydraulic cylinder driven by a hydraulic power pack.
The piston splitter consists of a hydraulic power pack, one or more cylinders each bearing one or more pistons on the cylinder body, and connecting hoses between the power pack and the cylinders.
Piston splitters have been used for rock demolition in building foundations, tunnels, shaft digging, trench work, quarrying and zoo areas. Large piston splitters are mounted on an excavator for more efficient demolition.
Splitting itself is efficient in comparison with the drilling of the required holes. Because the holes must be spaced closely, both toward the free face and to the sides, many holes are needed. Among manual tools, the wedge-type splitter requires a hole spacing of 20–25 cm, while the piston-type splitter allows 40–50 cm.
Larger wedge splitters allow spacings as large as 50–70 cm, and large piston splitters 60–100 cm. Large wedge or piston splitters are mounted on a vehicle such as an excavator.
Increasingly strict environmental regulation (of noise, vibration, dust and flying rock) is raising the demand for hydraulic rock and concrete splitting across the world.
External links
Demolition
Hydraulic tools | Hydraulic splitter | Physics,Engineering | 536 |
69,796,600 | https://en.wikipedia.org/wiki/Curium%28III%29%20chloride | Curium(III) chloride is the chemical compound with the formula CmCl3.
Structure
Curium(III) chloride has a 9 coordinate tricapped trigonal prismatic geometry.
Synthesis
Curium(III) chloride can be obtained from the reaction of hydrogen chloride gas with curium dioxide, curium(III) oxide, or curium(III) oxychloride at a temperature of 400–600 °C:
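A balanced scheme consistent with this description (a reconstruction for the sesquioxide and oxychloride routes, not quoted from the source) is:
Cm2O3 + 6 HCl → 2 CmCl3 + 3 H2O
CmOCl + 2 HCl → CmCl3 + H2O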
It can also be obtained from the dissolution of metallic curium in dilute hydrochloric acid:
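The standard stoichiometry for this dissolution (reconstructed; curium dissolves as the trivalent ion) is:
2 Cm + 6 HCl → 2 CmCl3 + 3 H2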
This method has a number of disadvantages associated with the ongoing processes of hydrolysis and hydration of the resulting compound in an aqueous solution, making it problematic to obtain a pure product using this reaction.
It can be obtained from the reaction of curium nitride with cadmium chloride:
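One balanced possibility for this metathesis (a reconstruction; the exact product equation is not reproduced in this copy) is:
2 CmN + 3 CdCl2 → 2 CmCl3 + 3 Cd + N2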
References
Curium compounds
Nuclear materials
Chlorides
Actinide halides | Curium(III) chloride | Physics,Chemistry | 181 |
2,322,219 | https://en.wikipedia.org/wiki/Epidemic%20dropsy | Epidemic dropsy is a form of edema of extremities due to poisoning by Argemone mexicana (Mexican prickly poppy).
Epidemic dropsy is a clinical state resulting from use of edible oils adulterated with Argemone mexicana seed oil.
Sanguinarine and dihydrosanguinarine are two major toxic alkaloids of argemone oil, which cause widespread capillary dilatation, proliferation and increased capillary permeability. When mustard oil is adulterated deliberately (as in most cases) or accidentally with argemone oil, proteinuria (specifically loss of albumin) occurs, with a resultant edema as would occur in nephrotic syndrome.
Other major symptoms are bilateral pitting edema of extremities, headache, nausea, loose bowels, erythema, glaucoma and breathlessness.
Leakage of the protein-rich plasma component into the extracellular compartment leads to the formation of edema. The haemodynamic consequences of this vascular dilatation and permeability lead to a state of relative hypovolemia with a constant stimulus for fluid and salt conservation by the kidneys. Illness begins with gastroenteric symptoms followed by cutaneous erythema and pigmentation. Respiratory symptoms such as cough, shortness of breath and orthopnoea, progressing to frank right-sided congestive cardiac failure, are seen.
Mild to moderate anaemia, hypoproteinaemia, mild to moderate renal azotemia, retinal haemorrhages, and glaucoma are common manifestations. There is no specific therapy. Removal of the adulterated oil and symptomatic treatment of congestive cardiac failure and respiratory symptoms, along with administration of antioxidants and multivitamins, remain the mainstay of treatment.
Epidemic dropsy occurs as an epidemic in places where use of mustard oil from the seeds of Brassica juncea, commonly known as Indian mustard, as a cooking medium is common. This is because there is an increased chance of adulteration (with argemone oil) and consumption of such adulterated mustard oil in these areas.
Signs and symptoms
Dropsy patients develop proteinuria specifically due to loss of albumin, with a resultant edema as would occur in nephrotic syndrome.
Major symptoms observed in patients are bilateral pitting edema of the extremities, headache, nausea, loose bowels, erythema, glaucoma and breathlessness.
Illness begins with gastroenteric symptoms, followed by cutaneous erythema and pigmentation, and then respiratory symptoms such as cough, shortness of breath and orthopnoea. In severe cases these conditions progress to frank right-sided congestive cardiac failure and the death of the patient.
Cause
Argemone mexicana
Argemone mexicana (family Papaveraceae), a native of West Indies and naturalized in India, is known as “Shailkanta” in Bengal and “Bharbhanda” in Uttar Pradesh. It is also popularly known as “Pivladhatura” or “Satyanashi”, meaning devastating. The plant grows wildly in mustard and other fields. Its seeds are black in colour and are similar to the dark coloured mustards seeds (Brassica juncea) in shape and size. Adulteration of argemone seeds in light yellow colored mustard seeds (Brassica compestris) can easily be detected, but these seeds are rather difficult to visualize when mixed with dark coloured mustard seeds.
Argemone seeds yield approximately 35% oil. Alkaloid content in argemone oil varies from 0.44% to 0.50%. Argemone seeds find use as a substitute because of the easy availability, low cost and their complete miscibility of their oil with mustard oil.
Mechanism
Mortality is usually due to heart failure, pneumonia, respiratory distress syndrome or renal failure and is around 5%. Long-term follow-up studies are scanty, so the long-term effects of argemone oil toxicity have not been documented. It has been reported that 25% of cases have edema beyond 2 months and 10% beyond 5 months. Pigmentation of the skin and excessive hair loss have been reported to last 4–5 months following the disease. The majority of patients recover completely in about 3 months.
Reactive oxygen species and oxidative stress: Studies of the blood of dropsy patients have revealed extensive reactive oxygen species (ROS) production (singlet oxygen and hydrogen peroxide) in argemone oil intoxication, leading to depletion of total antioxidants in the body, especially lipid-soluble antioxidants such as vitamins E and A (tocopherol and retinol). There is extensive damage to the antioxidant defense system (antioxidant enzymes and antioxidants) of the blood. Prior in vitro studies have shown that ROS are involved in argemone oil-induced toxicity, causing peroxidative damage to lipids in various hepatic sub-cellular fractions, including the microsomes and mitochondria of rats. The damage to the hepatic microsomal membrane causes loss of activity of cytochrome P-450 and other membrane-bound enzymes responsible for xenobiotic metabolism, which delays the bioelimination of sanguinarine and enhances its cumulative toxicity. Several lines of evidence have been advanced to explain the mechanism of toxicity of the argemone oil alkaloids. The toxicity of sanguinarine has been shown to depend on the reactivity of its iminium bond with nucleophilic sites such as thiol groups present at the active sites of enzymes and other vital proteins, indicating the electrophilic nature of the alkaloid.
Pulmonary toxicity: The decrease in glycogen levels following argemone oil intoxication could be due to enhanced glycogenolysis leading to the formation of glucose-1-phosphate, which enters the glycolytic pathway resulting in accumulation of pyruvate in the blood of experimental animals and dropsy patients. The enhancement of glycogenolysis can further be supported by the interference of sanguinarine in the uptake of glucose through blocking of sodium pump via Na+-K+-ATPase and thereby inhibiting the active transport of glucose across the intestinal barrier. It is well established that increased pyruvate concentration in blood uncouples oxidative phosphorylation, and this may be responsible for thickening of interalveolar septa and disorganized alveolar spaces in lungs of argemone oil-fed rats and the breathlessness as has been observed in human victims.
Cardiac failure: The inhibition of Na+-K+-ATPase activity of heart by sanguinarine is due to interaction with the cardiac glycoside receptor site of the enzyme, which may be responsible for producing degenerative changes in cardiac muscle fibers in the auricular wall of rats fed argemone oil and could be related to tachycardia and cardiac failure in epidemic dropsy patients.
Delayed clearance: Destruction of hepatic cytochrome P450 significantly affects the metabolic clearance by liver. The retention of sanguinarine in the gastrointestinal (GI) tract, liver, lung, kidney, heart and serum even after 96 hrs of exposure indicates these as the likely target sites of argemone oil toxicity.
Diagnosis
Nitric acid test and paper chromatography test are used in the detection of argemone oil. The paper chromatography test is the most sensitive test.
Treatment
Withdrawal of the contaminated cooking oil is the most important initial step. Bed rest with leg elevation and a protein-rich diet are useful. Supplements of calcium, antioxidants (vitamin C and E), and thiamine and other B vitamins are commonly used. Corticosteroids and antihistamines such as promethazine have been advocated by some investigators, but demonstrated efficacy is lacking. Diuretics are used universally but caution must be exercised not to deplete the intravascular volume unless features of frank congestive cardiac failure are present, as edema is mainly due to increased capillary permeability. Cardiac failure is managed by bed rest, salt restriction, digitalis and diuretics. Pneumonia is treated with appropriate antibiotics. Renal failure may need dialysis therapy and complete clinical recovery is seen. Glaucoma may need operative intervention, but generally responds to medical management.
Prevalence
Besides India, widespread epidemics have been reported from Mauritius, Fiji Islands, Northwest Cape districts of South Africa, Madagascar and also from Nepal. Apart from a South African study, where the epidemic occurred through contamination in wheat flour, all the epidemics occurred through the consumption of mustard oil contaminated with argemone oil.
In these cultural populations mustard oil is the prime edible oil.
The earliest reference to argemone oil poisoning was made by Lyon, who reported four cases of poisoning in Calcutta in 1877 from the use of this oil in food.
Since then, epidemic dropsy has been reported from Bengal, Bihar, Orissa, Madhya Pradesh, Haryana, Assam, Jammu and Kashmir, Uttar Pradesh, Gujarat, Delhi and Maharashtra, mainly due to consumption of food cooked in argemone oil mixed with mustard oil or occasionally by body massage with contaminated oil.
The epidemic in 1998 at New Delhi, India is the largest so far, in which over 60 persons lost their lives and more than 3,000 victims were hospitalized. A few studies have reported findings in patients affected by this epidemic.
Even after that, epidemics occurred at an alarming frequency in the Indian cities of Gwalior (2000), Kannauj (2002) and Lucknow (2005). Six possible cases with two deaths were reported in Gundari village in Banaskantha district of Gujarat, India, in June 2021.
References
External links
Toxicology
Vascular-related cutaneous conditions
Toxic effect of noxious substances eaten as food | Epidemic dropsy | Environmental_science | 2,102 |
17,420,797 | https://en.wikipedia.org/wiki/USA-201 | USA-201, also known as GPS IIR-19(M), GPS IIRM-6 and GPS SVN-48, is an American navigation satellite which forms part of the Global Positioning System. It was the sixth of eight Block IIRM satellites to be launched, and the nineteenth of twenty one Block IIR satellites overall. It was built by Lockheed Martin, using the AS-4000 satellite bus.
USA-201 was launched at 06:10 UTC on 15 March 2008, atop a Delta II carrier rocket, flight number D332, flying in the 7925-9.5 configuration. The launch took place from Space Launch Complex 17A at the Cape Canaveral Air Force Station, and placed USA-201 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37FM apogee motor.
By 18 May 2008, USA-201 was in an orbit with a perigee of , an apogee of , a period of 717.98 minutes, and 55.1 degrees of inclination to the equator. It is used to broadcast the PRN 07 signal, and operates in slot 4 of plane A of the GPS constellation. The satellite has a design life of 10 years and a mass of . As of 2012 it remains in service.
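The 717.98-minute period is just under 12 hours, roughly half a sidereal day, the semi-synchronous orbit used throughout the GPS constellation; as a result, the satellite's ground track repeats once per sidereal day.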
References
Spacecraft launched in 2008
GPS satellites
USA satellites | USA-201 | Technology | 273 |
26,078,570 | https://en.wikipedia.org/wiki/WinPenPack | winPenPack (often shortened to wPP) is an open-source software application suite for Windows. It is a collection of open source applications that have been modified to be executed directly from a USB flash drive (or any other removable storage device) without prior installation. WinPenPack programs are distributed as free software, and can be downloaded individually or grouped into suites.
History
The creator, Danilo Leggieri, put the site winPenPack.com online on 23 November 2005. The project and the associated community then grew quickly. Since that date, 15 new versions and hundreds of open-source portable applications were released. The project is well known in Italy and abroad. It is hosted on SourceForge. The collections are regularly distributed bundled with popular PC magazines in Italy and worldwide. A thriving community of users is actively contributing to the growth of the project. The site currently hosts various projects created and suggested by forum members, and is also used for bug reporting and suggestions.
Press coverage
Since May 2006, winPenPack has been covered by most major Italian PC publications including: PC Professionale, Win Magazine, Computer Magazine, Total Computer, Internet Genius, Quale Computer, Computer Week, and many others.
Features
Portable software
All the applications available in the winPenPack suites are portable applications.
Portable applications:
do not require installation
can be executed from any USB flash drive, and from any PC hard disk drive (internal or external)
leave no traces of their use in the Windows applications registry or any other user folder in the host PC hard drive
do not conflict with the programs installed in the host PC hard drive (for example, X-Firefox executed from a USB flash drive does not modify (or conflict with) the counterpart Firefox program installed on the host PC)
X-Software
X-Software is software that has been modified with X-Launcher to be executed as if it were a portable application. X-Launcher is a specific application which executes other applications in "portable mode" by means of recreating their original operating environment. A few examples of X-Software include X-Firefox (counterpart to Mozilla Firefox), X-Thunderbird (Mozilla Thunderbird), X-Gimp (GIMP), and others.
Main menu functions
The winPenPack main menu can be executed from any removable storage device (including, and especially, from USB flash drives). In each different winPenPack suite, the main menu is pre-configured to list all programs available (including programs belonging to other suites), and can be edited at any time. New programs can be added to the menu either manually (by means of the "Add" options or by drag-and-dropping them onto the menu) or automatically (please note that automatic installation is only available for X-Software, as opposed to portable applications).
Notes
External links
Application launchers
Computing websites
Free software distributions
Portable software suites
Portable software | WinPenPack | Technology | 602 |
43,221 | https://en.wikipedia.org/wiki/E%20number | E numbers, short for Europe numbers, are codes for substances used as food additives, including those found naturally in many foods, such as vitamin C, for use within the European Union (EU) and European Free Trade Association (EFTA). Commonly found on food labels, their safety assessment and approval are the responsibility of the European Food Safety Authority (EFSA). The fact that an additive has an E number implies that its use was at one time permitted in products for sale in the European Single Market; some of these additives are no longer allowed today.
Having a single unified list for food additives was first agreed upon in 1962 with food colouring. In 1964, the directives for preservatives were added, in 1970 antioxidants were added, in 1974 emulsifiers, stabilisers, thickeners and gelling agents were added as well.
Numbering schemes
The numbering scheme follows that of the International Numbering System (INS) as determined by the Codex Alimentarius committee, though only a subset of the INS additives are approved for use in the European Union as food additives. Outside continental Europe and Russia, E numbers are also encountered on food labelling in other jurisdictions, including the Cooperation Council for the Arab States of the Gulf, South Africa, Australia, New Zealand, Malaysia, Hong Kong, and India.
Colloquial use
In some European countries, the "E number" is used informally as a derogatory term for artificial food additives. For example, in the UK, food companies are required to include the "E number(s)" in the ingredients that are added as part of the manufacturing process. Many components of naturally occurring healthy foods and vitamins have assigned E numbers (and the number is a synonym for the chemical component), e.g. vitamin C (E300) and lycopene (E160d), found in carrots. At the same time, "E number" is sometimes misunderstood to imply approval for safe consumption. This is not necessarily the case, e.g. Avoparcin (E715) is an antibiotic once used in animal feed, but is no longer permitted in the EU, and has never been permitted for human consumption. Sodium nitrite (E250) is toxic. Sulfuric acid (E513) is caustic.
Classification by numeric range
Not all examples of a class fall into the given numeric range; moreover, certain chemicals (particularly in the E400–499 range) have a variety of purposes.
Full list
The list shows all components that have an E-number assigned, even those no longer allowed in the EU.
E100–E199 (colours)
E200–E299 (preservatives)
E300–E399 (antioxidants, acidity regulators)
E400–E499 (thickeners, stabilisers, emulsifiers)
E500–E599 (acidity regulators, anti-caking agents)
E600–E699 (flavour enhancers)
E700–E799 (antibiotics)
E900–E999 (glazing agents, gases and sweeteners)
E1000–E1599 (additional additives)
See also
Food Chemicals Codex
List of food additives
International Numbering System for Food Additives
Clean label
References
External links
CODEXALIMENTARIUS FAO-WHO, the international foods standards, established by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) in 1963
See also their document "Class Names and the International Numbering System for Food Additives" (Ref: CAC/GL #36 publ. in 1989, Revised in 2008, Amended in 2018, 2019, 2021)
Joint FAO/WHO Expert Committee on Food Additives (JECFA) publications at the World Health Organization (WHO)
Food Additive Index, JECFA, Food and Agriculture Organization (FAO)
E-codes and ingredients search engine with details/suggestions for Muslims
Databases of EU-approved food additives and flavoring substances
Food Additives in the European Union
The Food Additives and Ingredients Association, FAIA website, UK.
Chemical numbering schemes
Chemistry-related lists
Food additives
European Union food law
1962 introductions
1962 neologisms
Number-related lists | E number | Chemistry,Mathematics | 899 |
46,180,241 | https://en.wikipedia.org/wiki/Synthetic%20Reaction%20Updates | Synthetic Reaction Updates was a current awareness bibliographic database from the Royal Society of Chemistry that provided alerts of recently published developments in synthetic organic chemistry.
It covered primary research in general and organic chemistry published in chemistry journals. Each record contains a reaction scheme, as well as bibliographic data and a link to the original article on the publisher's website. Subscribers were able to search by topic and reaction type or register for email alerts of new content based on their search preferences.
History
The database was established in 2015 to replace the two discontinued databases Methods in Organic Synthesis and Catalysts and Catalysed Reactions.
Methods in Organic Synthesis was an online database that was established in 1998 and updated weekly with the latest developments in organic synthesis. It was also available as a monthly print bulletin.
Catalysts & Catalysed Reactions was a monthly current-awareness journal that was published from 2002 to 2014. It covered the research areas of catalysed reactions and catalysts.
References
External links
Methods in Organic Synthesis
Catalysts & Catalysed Reactions
Chemical synthesis
Royal Society of Chemistry
Bibliographic databases and indexes
2015 establishments in the United Kingdom | Synthetic Reaction Updates | Chemistry | 234 |
67,592,206 | https://en.wikipedia.org/wiki/Reverse%20waterfall | Reverse waterfall is a phenomenon in which water is blown upward due to strong wind in waterfalls giving an apparent perception of water flowing upwards. Strong blowing of wind above about 75 km/h can cause such phenomena.
List of observed locations
Reverse waterfalls have been observed in Australia, India, Japan, the UK, the US and other parts of the world where strong winds occur, such as:
Australia: A wind of 70 km/h caused reverse waterfalls at various locations in the Sydney, Central Coast, Mid North Coast, Hunter and Illawarra areas and in the Royal National Park.
India:
A waterfall at Naneghat in Malshej Ghat Road near Mumbai
Samrad village in the Sandhan Valley has waterfalls that flow in reverse during the monsoon.
The Amboli hills near Belgaum have various waterfalls that become active in the monsoon and are blown upward by strong winds.
Japan: Shiretoko National Park in Japan has the Furepe Falls, which drop into the Sea of Okhotsk. This fall is also reversed during strong winds.
Brazil: In the Chapada Diamantina National Park the Cachoeira da Fumaça (Smoke Waterfall) shows the phenomenon.
Chile: The waterfall in Talca shows the phenomenon.
United Kingdom: Has been observed in the Peak District amongst other highland areas, commonly in autumn and winter when strong winds can occur. The Kinder Downfall waterfall in the Kinder Scout area of the Peak District regularly exhibits this phenomenon.
"Ireland": at the Cliffs of Moher, during storm Ciaran (November, 2023).
United States:
Observed on a cliff in Ivins, Utah, on 16 January 2023. Winds created updrafts strong enough to spray the waterfall upwards onto the plateau.
The Waipuhia Falls on Oahu, Hawaii, is reversed by north-easterly trade winds.
References
Earth phenomena
Waterfalls | Reverse waterfall | Physics | 376 |
72,572,231 | https://en.wikipedia.org/wiki/Equinoctial%20hours | An equinoctial hour is one of the 24 equal parts of the full day (which includes daytime and nighttime).
Its length, unlike that of the temporal hour, does not vary with the season but is constant. Measurement of the full day in equinoctial hours of equal length was first used about 2,400 years ago in Babylonia, to make astronomical observations comparable regardless of the season. Our present hour is an equinoctial hour, freed from seasonal variation and from the small error due to the non-uniform rotation of the Earth, and realized by modern technical means (atomic clocks, satellite and VLBI astrometry).
When the temporal hour was used, the daytime and nighttime, whose lengths vary greatly throughout the year, were each divided into 12 hours. This corresponded to the earlier sentiment and custom of not grouping the night with the daytime.
The name equinoctial hours refers to the fact that the temporal hours of the daytime (daylight hours) and those of the night are of equal length at each of the equinoxes.
History
Equinoctial hours are attested, in distinction to the "unequal" temporal hours, at least as early as Ancient Greece.
Geminos of Rhodes reported the observation of Pytheas of Massalia that the duration of the night depended on the geographical latitude of the place in question. However, it is not clear from his explanations whether he meant temporal or equinoctial hours. Otto Neugebauer cites this account as the oldest testimony to the concept of the hour (Greek hōra) as a defined measure of time.
The Babylonian calendar knew no division of the day into 24 time units, so Ancient Egyptian influence on this system can be considered probable. The period of its origin can be dated to the 4th century BC, since Pytheas of Massalia refers to the term Gēs periodos ("circuit of the Earth") introduced by Eudoxus of Cnidus.
The use of equinoctial hours was already familiar in the work of Hipparchus of Nicaea. In the appendix to his commentary on Aratos of Soloi and Eudoxos of Knidos, he uses the well-known 24-hour circles and names stars whose rises are separated from each other by about one equinoctial hour in certain seasons.
With the invention of the striking clock, one could for the first time read equinoctial hours mechanically without having to perform astronomical calculations. A mechanical clock displaying the previously used temporal hours would have been very costly, though its construction was occasionally attempted nevertheless. Equinoctial hours are first attested in conjunction with striking clocks in Padua in 1344, in Genoa in 1353, and in Bologna in 1356. Subsequently, striking clocks came into use throughout Europe.
Equal hours in ancient Egypt
In Ancient Egypt, the earliest use of equal hours is attested by an inscription from the time of Amenophis I, around 1525 BC. The use of water clocks made it possible to measure individual hour units, for example for the division of decan star intervals, where fractions of hours were also taken into account.
Ten equivalent hours were used for the time between two sunrises.
Equal hours in Babylonia
The temporal hour was unknown to the Babylonians until the third century BC. However, attempts have been made to establish a second ideal calendar with seasonal hours alongside the astronomical system of equivalent hours. Bartel Leendert van der Waerden analyzed the "Babylonian system of the ideal calendar" in 1974:
Neugebauer reiterated this finding in 1975 as an important feature which distinguishes it from the later Greek temporal hours. Babylonian astronomers measured the durations of daytime and nighttime with a gnomon and a water clock, in the units BĒRU and UŠ. These periods were divided into equivalent time units for the purposes of celestial observation. The use of a gnomon together with a water clock is already documented in the MUL.APIN cuneiform tablets from around 700 BC.
From their contents it is clear that the values for the duration of the light day and the night were recorded at the four cardinal dates of the year, aligned with the longest and shortest days and the equinoxes. The records include gnomon tables, but these are preserved only for specific dates in the Babylonian calendar: the 15th of Nisan and the 15th of Tammuz. The tables for the 15th of Tishrei and the 15th of Tevet stood at the beginning of the second column, which has broken away. The gnomon tables are written such that the length of the gnomon corresponds to one Mesopotamian cubit, which measured between 40 and 50 cm.
A 24-hour day contained twelve danna which, under the Babylonian model of the mean sun, were equivalent time units each lasting 120 minutes. The equivalent hours were based on the Sumerian system of the distance covered on foot in daylight. The unit of measurement, with a distance of about 10 km as its computational value, is also, erroneously, called a "double hour" in modern literature.
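These figures are mutually consistent: twelve units of 120 minutes give 12 × 120 = 1,440 minutes, i.e. 24 hours, so one danna spans two equinoctial hours; covering the unit's nominal distance of about 10 km in that time implies a walking pace of roughly 5 km/h (an illustrative check, not from the source).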
See also
Epic of Gilgamesh
Hour
Literature
Friedrich Karl Ginzel: Handbuch der mathematischen und technischen Chronologie, Vol. 1 - Zeitrechnung der Babylonier, Ägypter, Mohammedaner, Perser, Inder, Südostasiaten, Chinesen, Japaner und Zentralamerikaner -, Deutsche Buch-Ex- und Import, Leipzig 1958 (Reprint Leipzig 1906)
Richard Anthony Parker: Egyptian Astronomy, Astrology and calendrical reckoning In: Charles-Coulson Gillispie: Dictionary of scientific Biography - American Council of Learned Societies - Vol. 15, Supplement 1 (Roger Adams, Ludwik Zejszner: Topical essays), Scribner, New York 1978, ISBN 0-684-14779-3, pp. 706–727.
François Thureau-Dangin: Itanerare - Babylonische Doppelstunde -. In: Dietz Otto Edzard: Reallexikon der Assyriologie und vorderasiatischen Archäologie. Vol. 5: Ia to Kizzuwatna, de Gruyter, Berlin 1980, ISBN 3-11-007192-4, p. 218.
François Thureau-Dangin: Rituels Accadiens Leroux, Paris 1921, p. 133.
Wolfgang Fels: Marcus Manilus: Astronomica - (Latin–German. published by Reclam, Stuttgart 1990, ISBN 3-15-008634-5.
Friedrich-Karl Ginzel: Handbuch der mathematischen und technischen Chronologie II - Das Zeitrechnungswesen der Völker: Zeitrechnung der Juden, der Naturvölker, der Römer und Griechen sowie Nachträge zum 1. Bande. Deutscher Buch-Ex- und Import, Leipzig 1958 (Reprint of first edition Leipzig 1911).
Otto Neugebauer: A history of ancient mathematical astronomy. Studies in the history of mathematics and physical sciences, Vols. 1–3. Springer, Berlin 2006, ISBN 3-540-06995-X (Reprint of 1975 Berlin edition).
References
External links
Die Aequinoctialstunden (German language site)
Timekeeping
Babylonia
Sumer
History of timekeeping
Equinoxes | Equinoctial hours | Physics,Astronomy | 1,531 |
31,360,541 | https://en.wikipedia.org/wiki/Suction%20caisson | Suction caissons (also referred to as suction anchors, suction piles or suction buckets) are a form of fixed platform anchor in the form of an open bottomed tube embedded in the sediment and sealed at the top while in use so that lifting forces generate a pressure differential that holds the caisson down. They have a number of advantages over conventional offshore foundations, mainly being quicker to install than deep foundation piles and being easier to remove during decommissioning. Suction caissons are now used extensively worldwide for anchoring large offshore installations, like oil platforms, offshore drillings and accommodation platforms to the seafloor at great depths. In recent years, suction caissons have also seen usage for offshore wind turbines in shallower waters.
Oil and gas recovery at great depth could have been a very difficult task without the suction anchor technology, which was developed and used for the first time in the North Sea 30 years ago.
The use of suction caissons/anchors has now become common practice worldwide. Statistics from 2002 revealed that 485 suction caissons had been installed in more than 50 different localities around the world, in depths to about 2000 m. Suction caissons have been installed in most of the deep water oil producing areas around the world: The North Sea, Gulf of Mexico, offshore West Africa, offshore Brazil, West of Shetland, South China Sea, Adriatic Sea and Timor Sea. No reliable statistics have been produced after 2002, but the use of suction caissons is still rising.
Description
A suction caisson can effectively be described as an inverted bucket that is embedded in the marine sediment. Attachment to the sea bed is achieved either through pushing or by creating a negative pressure inside the caisson skirt by pumping water out of the caisson; both of these techniques have the effect of securing the caisson into the sea bed. The foundation can also be rapidly removed by reversing the installation process, pumping water into the caisson to create an overpressure.
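As a rough, illustrative estimate (not a design calculation from the source), the suction-derived holding force is approximately the pressure differential multiplied by the caisson's plan area, F ≈ ΔP × πD²/4. For a caisson 6.5 m in diameter, the plan area is about 33 m², so an underpressure of 50 kPa alone would supply on the order of 1.7 MN of resistance, before accounting for skirt friction and soil strength.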
The concept of suction technology was developed for projects where gravity loading is not sufficient for pressing foundation skirts into the ground. The technology was also developed for anchors subject to large tension forces due to waves and stormy weather. The suction caisson technology functions very well in a seabed with soft clays or other low strength sediments. The suction caissons are in many cases easier to install than piles, which must be driven (hammered) into the ground with a pile driver.
Mooring lines are usually attached to the side of the suction caisson at the optimal load attachment point, which must be calculated for each caisson. Once installed, the caisson acts much like a short rigid pile and is capable of resisting both lateral and axial loads. Limit equilibrium methods or 3D finite element analysis are used to calculate the holding capacity.
History
Suction caissons were first used as anchors for floating structures in the offshore oil and gas industry, including offshore platforms such as the Draupner E oil rig.
There are great differences between the first small suction caissons that were installed for Shell at the Gorm field in the North Sea in 1981 and the large suction caissons that were installed for the Diana platform in the Gulf of Mexico in 1999. The twelve suction caissons on the Gorm field were intended to secure a simple loading buoy device at a depth of 40 metres, while the installation of suction anchors for the Diana platform was a world record in itself at that time, concerning water depth and size of anchors. The height of the Diana suction caissons is 30 metres, their diameter 6.5 metres, and they were installed at a depth of about 1500 m on soft clay deposits. Since then, suction caissons have been installed at even larger depths, but the Diana installation was a technology breakthrough for the 20th century.
An important development step for the suction caisson technology emerged from cooperation between the former operator in the North Sea, Saga Petroleum AS, and Norwegian Geotechnical Institute (NGI). Saga Petroleum's oil-producing Snorre A platform was a tension-leg platform of a type that in other parts of the world would have been founded with up to 90 metres long piles. Unfortunately on the Snorre oil field, it was difficult to use long piles due to the presence of huge pebbles at 60 m depth under the seabed. Saga Petroleum decided therefore to use suction caissons, which were analysed by NGI. These analyses were verified from extensive model tests. The calculations showed that the platform could be safely secured by suction caissons of only 12 m in length. Snorre A started to produce oil in 1992 and is now operated by the Norwegian oil company Statoil.
Suction buckets were tested with offshore wind turbines at Frederikshavn in 2002, at Horns Rev in 2008 and Borkum Riffgrund in 2014, and are to be used in a third of the foundations at the initial development at Hornsea Wind Farm.
Statoil have gone on to use the technology for windfarms.
They are also planned to be used for some of the wind turbines in the Hornsea Project One wind farm scheduled to be completed in 2020. Similarly, a suction bucket contract has been awarded for the Aberdeen Bay Wind Farm.
Gravity oil platforms
Suction caissons have many similarities, in foundation design principles and solutions, with the big gravity oil platforms that were installed in the North Sea when offshore oil production started there at the beginning of the 1970s. The first gravity oil platform, on the Ekofisk oil field, had a foundation area as big as a football field, and it was placed on a seabed of very dense sand. The platform was designed to tolerate waves up to 24 m in height.
As the installation of oil platforms continued in the North Sea, in areas with poor ground conditions such as soft clays, they were designed to survive even higher storm waves. These platforms were founded on a system of cylindrical skirts that were penetrated into the ground under combined gravity load and underpressure. The oil platform at the Gullfaks C field was equipped with 22 m long skirts. The Troll A platform is founded in 330 m depth with 30 m long skirts and is the world's biggest gravity platform.
Research and development
The Norwegian Geotechnical Institute (NGI) has been heavily involved with the concept development, design and installation of suction anchors from the start. The project "Application of offshore bucket foundations and anchors in lieu of conventional designs" (1994-1998) was sponsored by 15 international petroleum and industry companies and was one of the most important studies. The project “Skirted foundations and anchors in clay” (1997-1999) was sponsored by 19 international companies organized through the Offshore Technology Research Center (OTRC) in the US, and the project “Skirted offshore foundations and anchors in sand” (1997-2000) was sponsored by 8 international companies. The main conclusions from the projects were presented in the 1999 OTC paper no 10824.
An industry sponsored study on the design and analysis of deepwater anchors in soft clay was completed in 2003, where NGI participated together with OTRC and Centre for Offshore Foundation Systems (COFS) in Australia. The overall objective was to provide the API Geotechnical Workgroup (RG7) and the Deepstar Joint Industry Project VI with background, data and other information needed to develop a widely applicable recommended practice for the design and installation of deepwater anchors.
The Norwegian classification society DNV (Det Norske Veritas), active worldwide in risk analysis and safety evaluation of special constructions, has produced a recommended practice report on the design procedures for suction anchors which is based on close cooperation with NGI. The main information from the project was presented in the 2006 OTC paper no 18038.
In 2002 NGI established the subsidiary NGI Inc in Houston. The subsidiary has since been awarded the detailed geotechnical design for more than 15 suction anchor projects in the Gulf of Mexico, and among these the challenging Mad Dog Spar project involving design of anchors located in old slide deposits below the Sigsbee Escarpment. For further information reference can be made to the 2006 OTC papers no 17949 and 17950.
See also
, a temporary water-excluding structure built in place, sometimes surrounding a working area as does an open caisson.
, for information on geotechnical considerations.
References
Offshore engineering | Suction caisson | Engineering | 1,735 |
13,416,327 | https://en.wikipedia.org/wiki/Accident-proneness | Accident-proneness is the idea that some people have a greater predisposition than others to experience accidents, such as car crashes and industrial injuries. It may be used as a reason to deny any insurance on such individuals.
Early work
The early work on this subject dates back to 1919, in a study by Greenwood and Woods, who studied workers at a British munitions factory and found that accidents were unevenly distributed among workers, with a relatively small proportion of workers accounting for most of the accidents.
Further work on accident-proneness was carried out in the 1930s and 1940s.
Present study
The subject is still being studied actively. Research into accident-proneness is of great interest in safety engineering, where human factors such as pilot error, or errors by nuclear plant operators, can have massive effects on the reliability and safety of a system.
One of the areas of most interest and more profound research is aeronautics, where accidents have been reviewed from psychological and human factors, to mechanical and technical failures. Many conclusive studies have presented that a human factor has great influence on the results of those occurrences.
Statistical evidence
Statistical evidence clearly demonstrates that different individuals can have different rates of accidents from one another; for example, young male drivers are the group at highest risk of being involved in car accidents. Substantial variation in personal accident rates also seems to occur between individuals.
Doubt
A number of studies have cast doubt, though, on whether accident-proneness actually exists as a "distinct, persistent and independently verifiable" physiological or psychological syndrome. Although substantial research has been devoted to this subject, no conclusive evidence seems to exist either for or against the existence of accident-proneness in this sense.
Nature and causes
The exact nature and causes of accident-proneness, assuming that it exists as a distinct entity, are unknown. Factors which have been considered as associated with accident-proneness have included absent-mindedness, clumsiness, carelessness, impulsivity, predisposition to risk-taking, and unconscious desires to create accidents as a way of achieving secondary gains.
Broad studies have measured the speed and accuracy with which various people, such as Japanese, Brazil-born Japanese, Chinese, Russian, Spanish, Filipino, Thai, and Central American subjects with different educational backgrounds, find a specific figure on a specially designed test sheet. The studies have revealed that educational background or study experience is the key factor in concentration capability. Screening new employees using this test gave drastic decreases in work accidents at several companies.
Hypophobia
In July 1992, Behavioral Ecology published experimental research conducted by biologist Lee A. Dugatkin where guppies were sorted into "bold", "ordinary", and "timid" groups based upon their reactions when confronted by a smallmouth bass (i.e. inspecting the predator, hiding, or swimming away) after which the guppies were left in a tank with the bass. After 60 hours, 40 percent of the timid guppies and 15 percent of the ordinary guppies survived while none of the bold guppies did.
In The Handbook of the Emotions (1993), psychologist Arne Öhman studied pairing an unconditioned stimulus with evolutionarily-relevant fear-response neutral stimuli (snakes and spiders) versus evolutionarily-irrelevant fear-response neutral stimuli (mushrooms, flowers, physical representation of polyhedra, firearms, and electrical outlets) on human subjects and found that ophidiophobia and arachnophobia required only one pairing to develop a conditioned response while mycophobia, anthophobia, phobias of physical representations of polyhedra, firearms, and electrical outlets required multiple pairings and went extinct without continued conditioning while the conditioned ophidiophobia and arachnophobia were permanent. Similarly, psychologists Susan Mineka, Richard Keir, and Veda Price found that laboratory-raised rhesus macaques did not display fear if required to reach across a toy snake to receive a banana unless the macaque was shown a video of another macaque withdrawing in fright from the toy (which produced a permanent fear-response), while being shown a similar video of another macaque displaying fear of a flower produced no similar response.
Psychologist Paul Ekman cites the following anecdote recounted by Charles Darwin in The Expression of the Emotions in Man and Animals (1872) in connection with Öhman's research:
In May 1998, Behaviour Research and Therapy published a longitudinal survey by psychologists Richie Poulton, Simon Davies, Ross G. Menzies, John D. Langley, and Phil A. Silva of subjects sampled from the Dunedin Multidisciplinary Health and Development Study who had been injured in a fall between the ages of 5 and 9, compared them to children who had no similar injury, and found that at age 18, acrophobia was present in only 2 percent of the subjects who had an injurious fall but was present among 7 percent of subjects who had no injurious fall (with the same sample finding that typical basophobia was 7 times less common in subjects at age 18 who had injurious falls as children than subjects that did not).
Psychiatrists Isaac Marks and Randolph M. Nesse and evolutionary biologist George C. Williams have noted that people with systematically deficient responses to various adaptive phobias (e.g. basophobia, ophidiophobia, arachnophobia) are more temperamentally careless and more likely to receive unintentional injuries that are potentially fatal, and have proposed that such deficient phobias should be classified as "hypophobia" due to their selfish genetic consequences. Nesse notes that while conditioned fear responses to evolutionarily novel dangerous objects such as electrical outlets are possible, the conditioning is slower because such cues have no prewired connection to fear; he notes further that despite the emphasis on the risks of speeding and drunk driving in driver's education, such education alone does not provide reliable protection against traffic collisions, and that nearly one-quarter of all deaths in 2014 of people aged 15 to 24 in the United States were in traffic collisions.
In April 2006, The Indian Journal of Pediatrics published a study comparing 108 secondary education students attending a special education school that were diagnosed with attention deficit hyperactivity disorder (ADHD) or a learning disability to a control group of 87 secondary school students that found the treatment group had experienced 0.57±1.6 accidents while the control group had experienced 0.23±0.4 accidents. In June 2016, the Journal of Attention Disorders published a study comparing a survey of 13,347 subjects (ages 3 to 17) from Germany in a nationwide, representative cross-sectional health interview and examination dataset collected by the Robert Koch Institute and a survey of 383,292 child and adolescent policyholders of a German health insurance company based in Saxony and Thuringia. Using a Chi-squared test on accident data about the subjects, the study found that 15.7% of subjects were reported to have been involved in an accident requiring medical treatment during the previous 12 months, while the percentage of ADHD subjects that had been involved in an accident was 23% versus 15.3% among the non-ADHD group and that the odds ratio for accidents was 1.6 for ADHD subjects compared to those without. Of the subjects in both samples diagnosed with ADHD (653 subjects and 18,741 policyholders respectively), approximately three-quarters of cases in both surveys were male (79.8% and 73.3% respectively).
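As a check on these figures, the odds ratio can be recomputed from the reported proportions: (0.23/0.77) ÷ (0.153/0.847) ≈ 0.299/0.181 ≈ 1.65, consistent with the stated value of 1.6 (a reader's verification, not part of the study text).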
In March 2016, Frontiers in Psychology published a survey of 457 post-secondary student Facebook users (following a face validity pilot of another 47 post-secondary student Facebook users) at a large university in North America showing that the severity of ADHD symptoms had a statistically significant positive correlation with Facebook usage while driving a motor vehicle and that impulses to use Facebook while driving were more potent among male users than female users. In January 2014, Accident Analysis & Prevention published a meta-analysis of 16 studies examining the relative risk of traffic collisions for drivers with ADHD, finding an overall relative risk estimate of 1.36 without controlling for exposure, a relative risk estimate of 1.29 when controlling for publication bias, a relative risk estimate of 1.23 when controlling for exposure, and a relative risk estimate of 1.86 for ADHD drivers with oppositional defiant disorder and/or conduct disorder comorbidities. In June 2021, Neuroscience & Biobehavioral Reviews published a systematic review of 82 studies that all confirmed or implied elevated accident-proneness in ADHD patients and whose data suggested that the type of accidents or injuries and overall risk changes in ADHD patients over the lifespan.
In November 1999, Biological Psychiatry published a literature review by psychiatrists Joseph Biederman and Thomas Spencer on the pathophysiology of ADHD that found the average heritability estimate of ADHD from twin studies to be 0.8, while a subsequent family, twin, and adoption studies literature review published in Molecular Psychiatry in April 2019 by psychologists Stephen Faraone and Henrik Larsson that found an average heritability estimate of 0.74. Additionally, Randolph M. Nesse has argued that the 5:1 male-to-female sex ratio in the epidemiology of ADHD suggests that ADHD may be the end of a continuum where males are overrepresented at the tails, citing clinical psychologist Simon Baron-Cohen's suggestion for the sex ratio in the epidemiology of autism as an analogue. Despite critique about its limited scope, methodology, and atheoretical character, the Big Five personality traits model (which includes conscientiousness) is well-established and well-replicated, and it has been suggested that the Big Five may have distinct biological substrates.
See also
Accident analysis
Accident insurance
Congenital insensitivity to pain
Counterphobic attitude
Developmental coordination disorder § Associated disorders
Diathesis–stress model
Effects of the car on societies
Human factors and ergonomics
Lead–crime hypothesis
Passive–aggressive behavior
Traffic collision
References
Bundled references
Further reading
Safety
Safety engineering
Risk
Epidemiology
Accidents | Accident-proneness | Engineering,Environmental_science | 2,066 |
4,001,289 | https://en.wikipedia.org/wiki/Grain%20trade | The grain trade refers to the local and international trade in cereals such as wheat, barley, maize, and rice, and other food grains. Grain is an important trade item because it is easily stored and transported with limited spoilage, unlike other agricultural products. Healthy grain supply and trade is important to many societies, providing a caloric base for most food systems as well as important role in animal feed for animal agriculture.
The grain trade is as old as agricultural settlement, identified in many of the early cultures that adopted sedentary farming. Major societal changes have been directly connected to the grain trade, such as the fall of the Roman Empire. From the early modern period onward, grain trade has been an important part of colonial expansion and international power dynamics. The geopolitical dominance of countries like Australia, the United States, Canada and the Soviet Union during the 20th century was connected with their status as grain surplus countries.
More recently, international commodity markets have been an important part of the dynamics of food systems and grain pricing. Speculation, as well as other compounding production and supply factors leading up to the 2007–2008 financial crisis, created rapid inflation of grain prices during the 2007–2008 world food price crisis. The dominance of Ukraine and Russia in grain markets such as wheat meant that the Russian invasion of Ukraine in 2022 raised fears of a global food crisis that year. Changes to agriculture caused by climate change are expected to have cascading effects on global grain markets.
History
The grain trade is probably nearly as old as grain growing, going back to the Neolithic Revolution (around 9,500 BCE). Wherever there is a scarcity of land (e.g. cities), people must bring in food from outside to sustain themselves, either by force or by trade. However, many farmers throughout history (and today) have operated at the subsistence level, meaning they produce for household needs and have little left over to trade. The goal for such farmers is not to specialize in one crop and grow a surplus of it, but rather to produce everything the family needs and become self-sufficient. Only in places and eras where production is geared towards producing a surplus for trade (commercial agriculture) does a major grain trade become possible.
Classical world
In the ancient world, grain regularly flowed from the hinterlands to the cores of great empires: maize in ancient Mexico, rice in ancient China, and wheat and barley in the ancient Near East. With this came improving technologies for storing and transporting grains; the Hebrew Bible makes frequent mention of ancient Egypt's massive grain silos.
Merchant shipping was important for the carriage of grain in the classical period (and continues to be so). A Roman merchant ship could carry a cargo of grain the length of the Mediterranean for the cost of moving the same amount 15 miles by land. The large cities of the time could not exist without the supplies delivered. For example, in the first three centuries AD, Rome consumed about 150,000 tons of Egyptian grain each year.
During the classical age, the unification of China and the pacification of the Mediterranean basin by the Roman Empire created vast regional markets in commodities at either end of Eurasia. The grain supply to the city of Rome was considered to be of the utmost strategic importance to Roman generals and politicians.
In Europe, with the fall of the Roman Empire and the rise of feudalism, many farmers were reduced to a subsistence level, producing only enough to fulfill their obligation to their lord and the Church, with little for themselves, and even less for trading. The little that was traded was moved around locally at regular fairs.
Early modern and modern expansion
A massive expansion in the grain trade occurred when Europeans were able to bring millions of square kilometers of new land under cultivation in the Americas, Russia, and Australia, an expansion starting in the fifteenth and lasting into the twentieth century. In addition, the consolidation of farmland in Britain and Eastern Europe, and the development of railways and the steamship shifted trade from local to more international patterns.
During this time, debate over tariffs and free trade in grain was fierce. Poor industrial workers relied on cheap bread for sustenance, but farmers wanted their government to create a higher local price to protect them from cheap foreign imports, resulting in legislation such as Britain's Corn Laws.
As Britain and other European countries industrialized and urbanized, they became net importers of grain from the various breadbaskets of the world. In many parts of Europe, as serfdom was abolished, great estates were accompanied by many inefficient smallholdings, but in the newly colonized regions massive operations were available not only to great nobles, but also to the average farmer. In the United States and Canada, the Homestead Act and the Dominion Lands Act allowed pioneers on the western plains to gain tracts of 160 acres (1/4 of a square mile) or more for little or no fee. This moved grain growing, and hence trading, to a much more massive scale. Huge grain elevators were built to take in farmers' produce and move it out via the railways to port. Transportation costs were a major concern for farmers in remote regions, however, and any technology that allowed the easier movement of grain was of great assistance; meanwhile, farmers in Europe struggled to remain competitive while operating on a much smaller scale.
20th century changes
In the 1920s and 1930s, farmers in Australia and Canada reacted against the pricing power of the large grain-handling and shipping companies. Their governments created the Australian Wheat Board and the Canadian Wheat Board as monopsony marketing boards, buying all the wheat in those countries for export. Together, those two boards controlled a large percentage of the world's grain trade in the mid-20th century. Additionally, farmers' cooperatives such as the wheat pools became a popular alternative to the major grain companies.
At the same time in the Soviet Union and soon after in China, disastrous collectivization programs effectively turned the world's largest farming nations into net importers of grain.
By the second half of the 20th century, the grain trade was divided between a few state-owned and privately owned giants. The state giants were Exportkhleb of the Soviet Union, the Canadian Wheat Board, the Australian Wheat Board, the Australian Barley Board, and so on. The largest private companies, known as the "big five", were Cargill, Continental, Louis Dreyfus, Bunge, and Andre, an older European company not to be confused with the more recent André Maggi Group from Brazil.
In 1972, the Soviet Union's wheat crop failed. To prevent shortages in their country, Soviet authorities were able to buy most of the surplus American harvest through private companies without the knowledge of the United States government. This drove up prices across the world, and was dubbed the "great grain robbery" by critics, leading to greater public attention being paid by Americans to the large trading companies.
By contrast, in 1980, the US government attempted to use its food power to punish the Soviet Union for its invasion of Afghanistan with an embargo on grain exports. This was seen as a failure in terms of foreign policy (the Soviets made up the deficit on the international market), and negatively impacted American farmers.
Modern trade
Since the Second World War, the trend in North America has been toward further consolidation of already vast farms. Transportation infrastructure has also promoted more economies of scale. Railways have switched from coal to diesel fuel and introduced hopper cars to carry more mass with less effort. The old wooden grain elevators have been replaced by massive concrete inland terminals, and rail transportation has retreated in the face of ever larger trucks.
Modern issues affecting the grain trade include food security concerns, the increasing use of biofuels, the controversy over how to properly store and separate genetically modified and organic crops, the local food movement, the desire of developing countries to achieve market access in industrialized economies, climate change and drought shifting agricultural patterns, and the development of new crops.
Price volatility and protections
Price volatility greatly affects countries that are dependent on grain imports, such as certain countries in the MENA region. "Price volatility is a life-and-death issue for many people around the world," warned ICTSD Senior Fellow Sergio Marchi. "Trade policies need to incentivize investment in developing country agriculture, so that poor farmers can build resistance to future price shocks." Two major price volatility crises in the early 21st century, the 2007–2008 world food price crisis and the 2022 food crisis, have had major negative effects on grain prices globally. Climate change is expected to cause major agricultural failures that will continue to produce volatile food prices, especially for bulk goods like grains.
Protection against international market prices has been an important part of how some countries have responded to the volatility of market prices. For example, farmers in the European Union, United States and Japan are protected by agricultural subsidies. The European Union's programs are organized under the Common Agricultural Policy. The agricultural policy of the United States is demonstrated through the "farm bill", while rice production in Japan is also protected and subsidized. Farmers in other countries have attempted to have these policies disallowed by the World Trade Organization, or have attempted to negotiate them away through the Cairns Group. At the same time, the wheat boards have been reformed and many tariffs have been greatly reduced, leading to a further globalization of the industry. For example, in 2008 Mexico was required by the North American Free Trade Agreement (NAFTA) to remove its tariffs on US and Canadian maize.
Similarly, protections in other contexts, such as guaranteed prices for grains in India, have been an important lifeline for small farmers in the context of further industrialization of agriculture. When the BJP government of Narendra Modi attempted to repeal guaranteed prices for farmers on key grains like wheat, farmers throughout the country rose in protest.
See also
Bread of Ukraine
Monoculture
Cash crop
References
Works cited
W. Broehl, Cargill Going Global, University of New England Press, 1998.
W. Broehl, Cargill Trading the World's Grain, University of New England Press, 1992.
Chad J. Mitcham, China's Economic Relations with the West and Japan, 1949-79: Grain, Trade and Diplomacy, Routledge, 2005.
Dan Morgan, Merchants of Grain, Viking, 1997.
W.E. Morriss, Chosen Instrument: A History of the Canadian Wheat Board, the McIvor Years, Canadian Wheat Board, 1987
Trade by commodity
Commodity markets
Agricultural economics
History of agriculture
Intensive farming | Grain trade | Chemistry | 2,138 |
36,326,574 | https://en.wikipedia.org/wiki/Ca%C3%B1o%20Delgadito%20virus | Caño Delgadito virus (CADV) is a hantavirus present in Venezuela. Its natural reservoir is Alston's cotton rat. Transmission among cotton rats appears to be horizontal. While human disease caused by CADV has not yet been identified, it has been isolated from oropharyngeal swabs and urine of infected cotton rats, indicating that it may be infectious to humans in the same manner as other hantaviruses, via inhalation of aerosolized droplets of saliva, respiratory secretions, or urine. CADV was first identified in the 1990s in rodents from the Llanos in Venezuela.
References
Hantaviridae | Caño Delgadito virus | Biology | 137 |
78,184,272 | https://en.wikipedia.org/wiki/GSX2 | GS homeobox 2 (GSX2) is a protein encoded by a gene of the same name, located on chromosome 4 in humans, and on chromosome 5 in mice.
It is especially important to regulating the development of the brain, particularly during embryonic development. Mutations have been linked to a variety of neurological disorders that can cause intellectual disability, dystonia (difficulty with movement) and seizures.
Structure
GSX2 is a polypeptide chain consisting of 304 amino acids, with a molecular weight of 32,031 daltons.
Function
GSX2 is a homeobox transcription factor essential for mammalian forebrain development, particularly in specifying and patterning the basal ganglia. It binds specific DNA sequences, crucial for dorsal-ventral patterning of the telencephalon and specifying neural progenitors in the ventral forebrain.
GSX2 acts within a temporal framework, initially guiding the specification of striatal projection neurons during early lateral ganglionic eminence (LGE) neurogenesis, and later supporting olfactory bulb interneuron development. Mutations in GSX2 have been linked to basal ganglia dysgenesis in humans, resulting in severe neurological symptoms, including dystonia and intellectual impairment.
GSX2 is highly expressed in neural progenitors within the ganglionic eminences, precursors to the basal ganglia and olfactory structures. It promotes neurogenesis while inhibiting differentiation into oligodendrocytes, a type of glial cell in the central nervous system.
Clinical significance
Neurodevelopmental disorders
Mutations in GSX2 have been linked to severe neurodevelopmental disorders characterized by specific brain malformations. This includes cases of basal ganglia agenesis, leading to symptoms such as a slowly progressive decline in neurologic function, dystonia, and intellectual impairment.
Diencephalic-mesencephalic junction dysplasia syndrome
A single nucleotide polymorphism and missense mutation in GSX2, rs1578004339, has been found to be a pathogenic cause of diencephalic-mesencephalic junction dysplasia syndrome, a neurodevelopmental disorder characterized by severe intellectual disability and seizures.
References
Genes on human chromosome 4
Developmental genes and proteins | GSX2 | Biology | 475 |
303,990 | https://en.wikipedia.org/wiki/Mercer%27s%20theorem | In mathematics, specifically functional analysis, Mercer's theorem is a representation of a symmetric positive-definite function on a square as a sum of a convergent sequence of product functions. This theorem, presented in (Mercer 1909), is one of the most notable results of the work of James Mercer (1883–1932). It is an important theoretical tool in the theory of integral equations; it is used in the Hilbert space theory of stochastic processes, for example the Karhunen–Loève theorem; and it is also used in the reproducing kernel Hilbert space theory where it characterizes a symmetric positive-definite kernel as a reproducing kernel.
Introduction
To explain Mercer's theorem, we first consider an important special case; see below for a more general formulation.
A kernel, in this context, is a symmetric continuous function
$$K : [a, b] \times [a, b] \to \mathbb{R},$$
where symmetric means that $K(x, y) = K(y, x)$ for all $x, y \in [a, b]$.
K is said to be a positive-definite kernel if and only if
$$\sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j K(x_i, x_j) \ge 0$$
for all finite sequences of points x1, ..., xn of [a, b] and all choices of real numbers c1, ..., cn. Note that the term "positive-definite" is well-established in literature despite the weak inequality in the definition.
The fundamental characterization of stationary positive-definite kernels (where $K(x, y) = f(x - y)$ for some continuous function $f$) is given by Bochner's theorem. It states that a continuous function $f$ is positive-definite if and only if it can be expressed as the Fourier transform of a finite non-negative measure $\mu$:
$$f(x) = \int_{\mathbb{R}} e^{i \omega x} \, d\mu(\omega).$$
This spectral representation reveals the connection between positive definiteness and harmonic analysis, providing a stronger and more direct characterization of positive definiteness than the abstract definition in terms of inequalities when the kernel is stationary, e.g., when it can be expressed as a 1-variable function of the distance between points rather than the 2-variable function of the positions of pairs of points.
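Bochner's criterion can be spot-checked numerically; the following is a minimal illustrative sketch (not part of the original article; the Gaussian kernel, grid, and tolerances are arbitrary choices of ours). The discrete Fourier transform of a sampled stationary positive-definite kernel should be real and non-negative.

import numpy as np

# Numerical spot-check of Bochner's theorem for the stationary Gaussian
# kernel k(d) = exp(-d^2 / 2). An odd, symmetric grid keeps the sampled
# kernel exactly even, so its DFT is real up to floating-point error.
d = np.linspace(-20, 20, 4097)
k = np.exp(-d**2 / 2)

spectrum = np.fft.fft(np.fft.ifftshift(k))   # ifftshift moves d = 0 to index 0

print(np.max(np.abs(spectrum.imag)) < 1e-9)  # True: spectrum is real (k is even)
print(spectrum.real.min() > -1e-9)           # True: spectral density is non-negative

The non-negative spectrum plays the role of the finite non-negative measure in Bochner's theorem, discretized to the DFT frequencies.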
Associated to K is a linear operator (more specifically a Hilbert–Schmidt integral operator when the interval is compact) on functions defined by the integral
$$[T_K \varphi](x) = \int_a^b K(x, s) \, \varphi(s) \, ds.$$
We assume $\varphi$ can range through the space $L^2[a, b]$ of real-valued square-integrable functions; however, in many cases the associated RKHS can be strictly larger than $L^2[a, b]$. Since $T_K$ is a linear operator, the eigenvalues and eigenfunctions of $T_K$ exist.
Theorem. Suppose K is a continuous symmetric positive-definite kernel. Then there is an orthonormal basis {ei}i of L2[a, b] consisting of eigenfunctions of TK such that the corresponding sequence of eigenvalues {λi}i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on [a, b] and K has the representation
$$K(s, t) = \sum_{j=1}^{\infty} \lambda_j \, e_j(s) \, e_j(t),$$
where the convergence is absolute and uniform.
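The theorem can be illustrated numerically with a minimal sketch (ours, not part of the original article): discretizing $T_K$ on a uniform grid reduces the eigenproblem to an ordinary symmetric eigendecomposition (a simple Nyström approximation), after which the eigenvalues come out non-negative and the expansion reconstructs the kernel at the grid points. The kernel min(s, t) and the grid size are arbitrary illustrative choices.

import numpy as np

# Illustrative Nystrom discretization of Mercer's theorem on [0, 1].
a, b, n = 0.0, 1.0, 400
x = np.linspace(a, b, n)
h = x[1] - x[0]                          # quadrature weight of the uniform grid

K = np.minimum.outer(x, x)               # continuous symmetric PSD kernel min(s, t)

# (T_K phi)(x_i) ~ h * sum_j K(x_i, x_j) phi(x_j), so eigenpairs of h*K
# approximate the eigenvalues and (scaled) eigenfunctions of T_K.
lam, E = np.linalg.eigh(h * K)

print(lam.min() >= -1e-12)                       # True: eigenvalues are non-negative
print(abs(lam.sum() - h * np.trace(K)) < 1e-10)  # True: sum of lam_i ~ integral of K(t, t) dt

# The full expansion sum_i lam_i e_i(s) e_i(t), with e_i(x_j) ~ E[j, i] / sqrt(h),
# recovers K exactly at the grid points:
K_hat = (E * lam) @ E.T / h
print(np.max(np.abs(K_hat - K)) < 1e-8)          # True

The second check anticipates the trace identity stated in the Trace section below.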
Details
We now explain in greater detail the structure of the proof of
Mercer's theorem, particularly how it relates to spectral theory of compact operators.
The map K ↦ TK is injective.
TK is a non-negative symmetric compact operator on L2[a,b]; moreover K(x, x) ≥ 0.
To show compactness, show that the image of the unit ball of L2[a,b] under TK is equicontinuous and apply Ascoli's theorem to show that the image of the unit ball is relatively compact in C([a,b]) with the uniform norm and a fortiori in L2[a,b].
Now apply the spectral theorem for compact operators on Hilbert spaces to TK to show the existence of the orthonormal basis {ei}i of L2[a,b].
If λi ≠ 0, the eigenvector (eigenfunction) ei is seen to be continuous on [a,b]. The sequence of partial sums
$$K_n(x, y) = \sum_{i=1}^{n} \lambda_i \, e_i(x) \, e_i(y)$$
then converges absolutely and uniformly (using Dini's theorem on the diagonal together with the Cauchy–Schwarz inequality) to a kernel K0 which is easily seen to define the same operator as the kernel K. Hence K = K0, from which Mercer's theorem follows.
Finally, to show non-negativity of the eigenvalues, one can write $\lambda \langle f, f \rangle = \langle T_K f, f \rangle$ for an eigenfunction f with eigenvalue λ, and express the right-hand side as an integral well-approximated by its Riemann sums, which are non-negative by positive-definiteness of K, implying $\lambda \langle f, f \rangle \ge 0$ and hence $\lambda \ge 0$.
Trace
The following is immediate:
Theorem. Suppose K is a continuous symmetric positive-definite kernel; TK has a sequence of nonnegative eigenvalues {λi}i. Then
$$\int_a^b K(t, t) \, dt = \sum_{i} \lambda_i.$$
This shows that the operator TK is a trace class operator and
$$\operatorname{trace}(T_K) = \int_a^b K(t, t) \, dt.$$
Generalizations
Mercer's theorem itself is a generalization of the result that any symmetric positive-semidefinite matrix is the Gramian matrix of a set of vectors.
The first generalization replaces the interval [a, b] with any compact Hausdorff space and Lebesgue measure on [a, b] is replaced by a finite countably additive measure μ on the Borel algebra of X whose support is X. This means that μ(U) > 0 for any nonempty open subset U of X.
A recent generalization replaces these conditions by the following: the set X is a first-countable topological space endowed with a Borel (complete) measure μ. X is the support of μ and, for all x in X, there is an open set U containing x and having finite measure. Then essentially the same result holds:
Theorem. Suppose K is a continuous symmetric positive-definite kernel on X. If the function κ is L1μ(X), where κ(x) = K(x, x) for all x in X, then there is an orthonormal set {ei}i of L2μ(X) consisting of eigenfunctions of TK such that the corresponding sequence of eigenvalues {λi}i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on X and K has the representation
$$K(s, t) = \sum_{j=1}^{\infty} \lambda_j \, e_j(s) \, e_j(t),$$
where the convergence is absolute and uniform on compact subsets of X.
The next generalization deals with representations of measurable kernels.
Let (X, M, μ) be a σ-finite measure space. An L2 (or square-integrable) kernel on X is a function
$$K \in L^2_{\mu \otimes \mu}(X \times X).$$
L2 kernels define a bounded operator TK by the formula
$$[T_K \varphi](x) = \int_X K(x, y) \, \varphi(y) \, d\mu(y).$$
TK is a compact operator (actually it is even a Hilbert–Schmidt operator). If the kernel K is symmetric, by the spectral theorem, TK has an orthonormal basis of eigenvectors. Those eigenvectors that correspond to non-zero eigenvalues can be arranged in a sequence {ei}i (regardless of separability).
Theorem. If K is a symmetric positive-definite kernel on (X, M, μ), then
$$K(x, y) = \sum_{i} \lambda_i \, e_i(x) \, e_i(y),$$
where the convergence is in the L2 norm. Note that when continuity of the kernel is not assumed, the expansion no longer converges uniformly.
Mercer's condition
In mathematics, a real-valued function K(x, y) is said to fulfill Mercer's condition if for all square-integrable functions g(x) one has
$$\iint g(x) \, K(x, y) \, g(y) \, dx \, dy \ge 0.$$
Discrete analog
This is analogous to the definition of a positive-semidefinite matrix. This is an $n \times n$ matrix $K$ which satisfies, for all vectors $g$, the property
$$(g, K g) = g^{\mathsf{T}} K g \ge 0.$$
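The discrete analogue is easy to verify numerically; here is a minimal sketch (ours; the kernel, sample points, and tolerances are arbitrary illustrative choices). A Gram matrix built from a kernel satisfying Mercer's condition has a non-negative quadratic form, or equivalently non-negative eigenvalues.

import numpy as np

# Verify positive semidefiniteness of a Gram matrix K_ij = k(x_i, x_j)
# for the Gaussian kernel k(s, t) = exp(-(s - t)^2).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=50)

K = np.exp(-(x[:, None] - x[None, :])**2)

for _ in range(1000):                        # spot-check g^T K g >= 0
    g = rng.normal(size=50)
    assert g @ K @ g >= -1e-9

print(np.linalg.eigvalsh(K).min() >= -1e-9)  # True: eigenvalues are non-negative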
Examples
A positive constant function
$$K(x, y) = c > 0$$
satisfies Mercer's condition, as the integral then becomes, by Fubini's theorem,
$$\iint g(x) \, c \, g(y) \, dx \, dy = c \int g(x) \, dx \int g(y) \, dy = c \left( \int g(x) \, dx \right)^2,$$
which is indeed non-negative.
See also
Kernel trick
Representer theorem
Reproducing kernel Hilbert space
Spectral theory
Notes
References
Adriaan Zaanen, Linear Analysis, North Holland Publishing Co., 1960.
Ferreira, J. C., Menegatto, V. A., Eigenvalues of integral operators defined by smooth positive definite kernels, Integral Equations and Operator Theory, 64 (2009), no. 1, 61–81. (Gives the generalization of Mercer's theorem for metric spaces. The result is easily adapted to first-countable topological spaces.)
Konrad Jörgens, Linear Integral Operators, Pitman, Boston, 1982.
Richard Courant and David Hilbert, Methods of Mathematical Physics, vol. 1, Interscience, 1953.
Robert Ash, Information Theory, Dover Publications, 1990.
J. Mercer, Functions of positive and negative type and their connection with the theory of integral equations, Philosophical Transactions of the Royal Society A, 209 (1909), 415–446.
H. König, Eigenvalue Distribution of Compact Operators, Birkhäuser Verlag, 1986. (Gives the generalization of Mercer's theorem for finite measures μ.)
Theorems in functional analysis | Mercer's theorem | Mathematics | 1,754 |
4,784,171 | https://en.wikipedia.org/wiki/Hole%20punching%20%28networking%29 | Hole punching (or sometimes punch-through) is a technique in computer networking for establishing a direct connection between two parties in which one or both are behind firewalls or behind routers that use network address translation (NAT). To punch a hole, each client connects to an unrestricted third-party server that temporarily stores external and internal address and port information for each client. The server then relays each client's information to the other, and using that information each client tries to establish direct connection; as a result of the connections using valid port numbers, restrictive firewalls or routers accept and forward the incoming packets on each side.
Hole punching does not require any knowledge of the network topology to function. ICMP hole punching, UDP hole punching and TCP hole punching respectively use Internet Control Message, User Datagram and Transmission Control Protocols.
Overview
Networked devices with public or globally accessible IP addresses can create connections between one another easily. Clients with private addresses may also easily connect to public servers, as long as the client behind a router or firewall initiates the connection. However, hole punching (or some other form of NAT traversal) is required to establish a direct connection between two clients that both reside behind different firewalls or routers that use network address translation (NAT).
Both clients initiate a connection to an unrestricted server, which notes endpoint and session information including public IP and port along with private IP and port. The firewalls also note the endpoints in order to allow responses from the server to pass back through. The server then sends each client's endpoint and session information to the other client, or peer. Each client tries to connect to its peer through the specified IP address and port that the peer's firewall has opened for the server. The new connection attempt punches a hole in the client's firewall as the endpoint now becomes open to receive a response from its peer. Depending on network conditions, one or both clients might receive a connection request. Successful exchange of an authentication nonce between both clients indicates the completion of a hole punching procedure.
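To make the sequence concrete, here is a minimal UDP hole-punching client sketch (ours, not from the original article). The rendezvous hostname, port, and the one-line wire format are hypothetical placeholders; real systems (e.g. STUN-based ones) use richer protocols, retries, and keep-alives.

import socket

# Minimal illustrative UDP hole-punching client. RENDEZVOUS stands in for an
# unrestricted third-party server that records each client's public endpoint
# and replies with the peer's public "ip:port" (hypothetical wire format).
RENDEZVOUS = ("rendezvous.example.org", 9999)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))

# 1. Register: this outbound packet creates a mapping in our NAT/firewall,
#    and the server learns the public endpoint it was received from.
sock.sendto(b"register", RENDEZVOUS)
data, _ = sock.recvfrom(1024)
ip, port = data.decode().split(":")
peer = (ip, int(port))

# 2. Punch: packets sent toward the peer's public endpoint may be dropped by
#    the peer's NAT at first, but they open our own NAT to the peer's replies.
for _ in range(5):
    sock.sendto(b"punch", peer)

# 3. Confirm: anything received from the peer means the hole is open both ways.
sock.settimeout(5.0)
try:
    msg, addr = sock.recvfrom(1024)
    print("direct path established with", addr, msg)
except socket.timeout:
    print("hole punching failed (e.g. a symmetric NAT rewrote the port)")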
Examples
VoIP products, online gaming applications, and P2P networking software all use hole punching.
Telephony software Skype uses hole punching to allow users to communicate audibly with one or more other users.
Fast-paced online multi-player games may use a hole punching technique or require users to create a permanent firewall pinhole in order to reduce network latency.
VPN applications such as Hamachi, ZeroTier, and Tailscale utilize hole punching to allow users to connect directly to subscribed devices behind firewalls.
Decentralized peer-to-peer file sharing software relies on hole punching for file distribution.
Requirements
Reliable hole punching requires consistent endpoint translation, and for multiple levels of NATs, hairpin translation.
When an outbound connection from a private endpoint passes through a firewall, it receives a public endpoint (public IP address and port number), and the firewall translates traffic between them. Until the connection is closed, the client and server communicate through the public endpoint, and the firewall directs traffic appropriately. Consistent endpoint translation reuses the same public endpoint for a given private endpoint, instead of allocating a new public endpoint for every new connection.
Hairpin translation creates a loopback connection between two of its own private endpoints when it recognizes that the destination endpoint is itself. This functionality is necessary for hole punching only when used within a multiple-layered NAT.
See also
Port Control Protocol (PCP)
NAT Port Mapping Protocol (NAT-PMP)
Internet Gateway Device Protocol (UPnP IGD)
Port knocking
Session Initiation Protocol
STUN
References
External links
How NAT traversal works
Computer network security | Hole punching (networking) | Engineering | 769 |
60,016,083 | https://en.wikipedia.org/wiki/Signaturen | Signaturen is a residential high-rise building in Tønsberg city, Norway. The building is situated in Kaldnes on the northernmost part of the island Nøtterøy in Tønsberg municipality. It is Vestfold county's tallest building.
The building is owned by the Norwegian residential property developer Selvaag Bolig and was completed in early 2019, though it opened to new residents in December 2018. Construction started in 2017.
The residential building has 13 floors and 23 apartments. The top floor can be reached by either stairs or a high-speed Kone traction elevator. The tower has a neo-futuristic architectural style.
References
External links
Signaturen on selvaagbolig.no
Residential skyscrapers
Buildings and structures completed in 2018
Buildings and structures completed in 2019
Residential buildings completed in 2019
Modernist architecture in Norway
Postmodern architecture
High-tech architecture
Neo-futurist architecture | Signaturen | Engineering | 188 |
20,172,645 | https://en.wikipedia.org/wiki/M110%20155%20mm%20projectile | The M110 155 mm projectile is an artillery shell used by the U.S. Army and U.S. Marine Corps. The M110 was originally designed as a chemical artillery round to deliver blister agents via howitzer as a replacement for the World War I-era 75 mm chemical projectiles. The design was later repurposed as a white phosphorus smoke round for marking, signaling, and screening purposes. The white phosphorus variants of the shell also have a secondary, incendiary effect.
Original design
Officially designated projectile, 155 mm howitzer, M110, the original round was a 26.8-inch (68.1 cm) steel shell with a rotating band near its base and a burster rod down its center. The original shell typically contained a payload of sulfur mustard (H) or distilled sulfur mustard (HD), which would fill the hollow space in the shell. As early as the 1960s, a white phosphorus version was created under the same designation, with a white phosphorus filler. Both versions were designed for employment by the M114 howitzer and the M44 Self-Propelled Howitzer for use in terrain denial (in the case of the mustard-filled versions) and target-marking and obscuration (in the case of the white phosphorus versions).
Design variants and markings
M110
The original version of the shell came in two variations, one filled with mustard (HD) (projectile, gas, persistent, HD, 155 mm howitzer, M110) and one filled with white phosphorus (WP) (projectile, smoke, WP, 155 mm gun, M110). To distinguish between the two, the HD versions were gray, marked with two horizontal green bands, like most other chemical artillery shells. The WP versions were gray with a single horizontal yellow band, as is standard for military smoke munitions.
Both versions are now considered obsolete, with the WP version seeing updated versions in later incarnations of the shell.
The HD version has not been produced since the 1960s and was never used in combat. Remaining stockpiles of the HD version are in the process of being destroyed in accordance with the 1997 Chemical Weapons Convention.
M110A1
The first upgrade to the M110 shell is only slightly modified from the original, maintaining the white phosphorus filler weight of the original with slight modifications to the release mechanisms to make the shell more reliable. It is primarily used for signaling and small-scale screening missions. The M110A1 is gray with a single horizontal yellow band, which is standard for military smoke munitions.
M110A2
The second upgrade to the M110 shell is more dramatically modified from the other two variants, with a thinner casing to increase the amount of filler that can be placed in the shell. The M110A2's larger white phosphorus payload increases the duration of the smoke it produces. This change makes the M110A2 ideal for target marking and large-scale obscuration missions. The M110A2 is gray with a single horizontal yellow band, which is standard for military smoke munitions.
Similar projectiles
M104 155 mm projectile
M121 155 mm projectile
M687 155 mm projectile
References
155 mm artillery shells
Chemical weapon delivery systems
Chemical weapons of the United States | M110 155 mm projectile | Chemistry | 664 |
13,780,863 | https://en.wikipedia.org/wiki/Quyllurit%27i | Quyllurit'i or Qoyllur Rit'i (Quechua quyllu rit'i, from quyllu "bright white" and rit'i "snow": "bright white snow") is a syncretic religious festival held annually at the Sinakara Valley in the southern highlands of the Cusco Region of Peru. Local indigenous people of the Andes know this festival as a native celebration of the stars. In particular they celebrate the reappearance of the Pleiades constellation, known in Quechua as Qullqa, or "storehouse", and associated with the upcoming harvest and New Year. The Pleiades disappears from view in April and reappears in June. The new year is marked by indigenous people of the Southern Hemisphere on the winter solstice in June, and it is also a Catholic festival. The people have celebrated this period of time for hundreds if not thousands of years. The pilgrimage and associated festival was inscribed in 2011 on the UNESCO Intangible Cultural Heritage Lists.
According to the Catholic Church, the festival is in honor of the Lord of Quyllurit'i and it originated in the late 18th century. The young native herder Mariano Mayta befriended a mestizo boy named Manuel on the mountain Qullqipunku. Thanks to Manuel, Mariano's herd prospered, so his father sent him to Cusco to buy a new shirt for Manuel. Mariano could not find anything similar, because that kind of cloth was sold only to the archbishop. Learning of this, the bishop of Cusco sent a party to investigate. When they tried to capture Manuel, he was transformed into a bush with an image of Christ crucified hanging from it. Thinking the archbishop's party had harmed his friend, Mariano died on the spot. He was buried under a rock, which became a place of pilgrimage known as the Lord of Quyllurit'i, or "Lord of Star (Brilliant) Snow." An image of Christ was painted on this boulder.
The Quyllurit'i festival attracts thousands of indigenous people from the surrounding regions, made up of Paucartambo groups (Quechua speakers) from the agricultural regions to the northwest of the shrine, and Quispicanchis (Aymara speakers) from the pastoral (herders) regions to the southeast. Both moieties make an annual pilgrimage to the feast, bringing large troupes of dancers and musicians. There are four groups of participants with particular roles: ch'unchu, qulla, ukuku, and machula. Attendees increasingly have included middle-class Peruvians and foreign tourists.
The festival takes place in late May or early June, to coincide with the full moon. It falls one week before the Christian feast of Corpus Christi. Events include several processions of holy icons and dances in and around the shrine of the Lord of Quyllurit'i. The culminating event for the indigenous non-Christian population takes place after the reappearance of Qullqa in the night sky; it is the rising of the sun after the full moon. Tens of thousands of people kneel to greet the first rays of light as the sun rises above the horizon. Until recently, the main event for the Church was carried out by ukukus, who climbed glaciers over Qullqipunku and brought back crosses and blocks of ice to place along the road to the shrine. These are believed to have healing qualities. Due to the melting of the glacier, the ice is no longer carried down.
Origins
There are several accounts of the origins of the Quyllurit'i festival. What follows are two versions: one relates the pre-Columbian origins, and the other the Catholic Church's version as compiled by the priest of the town of Ccatca between 1928 and 1946.
Pre-Columbian origins
The Inca followed both solar and lunar cycles throughout the year. The cycle of the moon was of primary importance for the timing of both agricultural activities and associated festivals. There are many celebrations of seasonal events related to animal husbandry, sowing seeds, and harvesting of crops. Important festivals such as Quyllurit'i, perhaps the most significant of them, are still celebrated on the full moon.
The Quyllurit'i festival takes place at the end of a period of a few months when the Pleiades constellation, or Seven Sisters, a 7-star cluster in the Taurus constellation, disappears and reappears in the skies of the Southern Hemisphere. Its time of disappearance was marked in Inca culture by a festival for Pariacaca, the god of water and torrential rains. It occurs near the date of qarwa mita (qarwa meaning when the corn leaves are yellow).
The return of the constellation about 40 days later, called unquy mita in Quechua, was long associated in the Southern Hemisphere with the time of the coming harvest and therefore a time of abundance for the people. Incan astronomers had named the Pleiades constellation as Qullqa, or "storehouse," in their native language of Quechua.
Metaphorically, the constellation's disappearance from the night sky and reemergence approximately two months afterward is a signal that the human planes of existence have times of disorder and chaos, but also return to order.
Catholic Church origins
In the city of Cuzco in the late 17th century, the celebration of Corpus Christi reached a height under Bishop Manuel de Mollinedo y Angulo (1673–99), with processions through the city including Inca nobles in ceremonial regalia. The bishop also commissioned portraits of the nobles in their ceremonial clothes. Scholars such as Carolyn Dean have studied this evidence for its suggestions about related church rituals.
Dean believes that early churchmen thought such Catholic rituals could displace indigenous ones. She examines the feast of Corpus Christi and its relationship to the indigenous harvest festival at winter solstice, celebrated in early June in the Southern Hemisphere. According to the church, events of the late 18th century that included a sighting of Christ on the mountain Qullqipunku became part of myth, and the pilgrimage festival of the Lord of Quyllurit'i is still celebrated in the 21st century.
It is told that an Indian boy named Mariano Mayta used to watch over his father's herd of alpaca on the slopes of the mountain. He wandered into the snowfields of the glacier, where he encountered a mestizo boy named Manuel. They became good friends, and Manuel provided Mariano with food. When the boy did not return home for meals, Mariano's father went looking for his son. He was surprised to find his herd had increased. As a reward, he sent Mariano to Cusco to get new clothes. Mariano asked to buy some also for Manuel, who wore the same outfit every day. His father agreed, so Mariano asked Manuel for a sample in order to buy the same kind of cloth in Cusco.
Mariano was told that this refined cloth was restricted for use only by the bishop of the city. Mariano went to see the prelate, who was surprised by the request. He ordered an inquiry of Manuel, directed by the priest of Oncogate (Quispicanchi), a village close to the mountain. On June 12, 1783, the commission ascended Qullqipunku with Mariano; they found Manuel dressed in white and shining with a bright light. Blinded, they retreated, returning with a larger party. On their second try they reached the boy. But when they touched him, he was transformed into a tayanka bush (Baccharis odorata) with the crucified Christ hanging from it. Thinking the party had harmed his friend, Mariano fell dead on the spot. He was buried under the rock where Manuel had last appeared.
The tayanka tree was sent to Spain, as requested by King Charles III. As it was never returned, the Indian population of Ocongate protested. The local priest ordered a replica, which became known as the Lord of Tayankani. The burial site of Mariano attracted a great number of Indian devotees, who lit candles before the rock. Religious authorities ordered the painting of an image of Christ crucified on the rock. This image became known as the Lord of Quyllurit'i. In Quechua, quyllur means star and rit'i means snow; thus, the term means Lord of Star Snow.
Pilgrims
The Quyllurit'i festival attracts more than 10,000 pilgrims annually, most of them indigenous peoples from rural communities in nearby regions. They are from two moieties: Quechua-speaking Paucartambo, people from agricultural communities located to the northwest of the shrine in the provinces of Cusco, Calca, Paucartambo and Urubamba; and Aymara-speaking Quispicanchis, which encompasses those living to the southeast in the provinces of Acomayo, Canas, Canchis and Quispicanchi. This geographic division also reflects social and economic distinctions, as the Quechuas of Paucartambo cultivate agricultural crops, whereas Quispicanchis is populated by the Aymara, whose lives are based on animal husbandry, especially herds of alpaca and llama.
Peasants from both moieties undertake an annual pilgrimage to the Quyllurit'i festival, with representatives of each community carrying a small image of Christ to the sanctuary. Together, these delegations include a large troupe of dancers and musicians dressed in four main styles:
Ch'unchu: wearing feathered headdresses and carrying a wood staff, the ch'unchus represent the indigenous inhabitants of the Amazon Rainforest, to the north of the sanctuary. There are several types of ch'unchu dancers; the most common is wayri ch'unchu, which comprises up to 70% of all Quyllurit'i dancers.
Qhapaq Qulla: dressed in a "waq'ullu" knitted mask, a hat, a woven sling and a llama skin, qullas represent the Aymara inhabitants of the Altiplano to the south of the sanctuary. Qulla is considered a mestizo dance style, whereas ch'unchu is regarded as indigenous.
Ukuku: clad in a dark coat and a woolen mask, the ukukus (spectacled bear) represent the role of tricksters; they speak in high-pitched voices, and play pranks, but have the serious responsibility of keeping order among the thousands of pilgrims. Some also go up to the glacier to spend the night. They cut blocks of glacier ice and carry them on their backs to their people at the festival in the valley. When melted, the water is believed to be medicinal for body and mind. It is used for holy water in the churches during the next year. In Quechua mythology, ukukus are the offspring of a woman and a bear, feared by everyone because of their supernatural strength. In these stories, the ukuku redeems itself by defeating a condenado, a cursed soul, and becoming an exemplary farmer.
Machula: wearing a mask, a humpback, and a long coat, and carrying a walking stick, machulas represent the ñawpa machus, the mythical first inhabitants of the Andes. In a similar way to the ukukus, they perform an ambiguous role in the festival, being comical as well as constabulary figures.
Quyllur Rit'i also attracts visitors from outside the Paucartambo and Quispicanchis moieties. Since the 1970s, an increasing number of middle-class mainstream Peruvians undertake the pilgrimage, some of them at a different date than more traditional pilgrims. There has also been a rapid growth in the number of North American and European tourists drawn to the indigenous festival, prompting fears that it is becoming too commercialized. The pilgrimage and associated festival were inscribed in 2011 on the UNESCO Intangible Cultural Heritage Lists.
Festival
The festival is attended by thousands of indigenous people, some of whom come from as far away as Bolivia. The Christian celebration is organized by the Brotherhood of the Lord of Quyllurit'i, a lay organization that also keeps order during the festival. Preparations start on the feast of the Ascension, when the Lord of Quyllurit'i is carried in procession 8 kilometers from its chapel at Mawallani to its sanctuary at Sinaqara.
On the first Wednesday after Pentecost, a second procession carries a statue of Our Lady of Fatima from the Sinaqara sanctuary to an uphill grotto to prepare for the festival. Most pilgrims arrive by Trinity Sunday, when the Blessed Sacrament is taken in procession through and around the sanctuary.
The following day, the Lord of Qoyllur Rit'i is taken in procession to the grotto of the Virgin and back. Pilgrims refer to this as the greeting between the Lord and Mary, referring to the double traditional Inca feasts of Pariacaca and Oncoy mita. (See section above.) On the night of this second day, dance troupes take turns to perform in the shrine.
At dawn on the third day, ukukus grouped by moieties climb the glaciers on Qullqipunku to retrieve crosses set on top. Some ukukus traditionally spent the night on the glacier to combat spirits. They also cut and bring back blocks of the ice, which is believed to have sacred medicinal qualities. The ukukus are considered to be the only ones capable of dealing with condenados, the cursed souls said to inhabit the snowfields. According to oral traditions, ukukus from different moieties used to engage in ritual battles on the glaciers, but this practice was banned by the Catholic Church. After a mass celebrated later this day, most pilgrims leave the sanctuary. One group carries the Lord of Quyllurit'i in procession to Tayankani before taking it back to Mawallani.
The festival precedes the official feast of Corpus Christi, held the Thursday following Trinity Sunday, but it is closely associated with it.
See also
Religion in Peru
Syncretism
Notes
Bibliography
Allen, Catherine. The Hold Life Has: Coca and Cultural Identity in an Andean Community. Washington: Smithsonian Institution Press, 1988.
Ceruti, Maria Constanza. Qoyllur Riti: etnografia de un peregrinaje ritual de raiz incaica por las altas montañas del Sur de Peru (in Spanish)
Dean, Carolyn. Inka Bodies and the Body of Christ: Corpus Christi in Colonial Cusco, Peru. Durham: Duke University Press, 1999.
Randall, Robert. "Qoyllur Rit'i, an Inca fiesta of the Pleiades: reflections on time & space in the Andean world," Bulletin de l'Institut Français d'Etudes Andines 9 (1–2): 37–81 (1982).
Randall, Robert. "Return of the Pleiades". Natural History 96 (6): 42–53 (June 1987).
Sallnow, Michael. Pilgrims of the Andes: regional cults in Cusco. Washington: Smithsonian Institution Press, 1987.
External links
Seti Gershberg, "Qoyllur Riti: An Inca Festival Celebrating the Stars", May 2013, The Path of the Sun website
Adrian Locke, "From Ice to Icon: El Señor de Qoyllur Rit'i as symbol of native Andean Catholic worship", Essex College (UK)
Vicente Revilla, photographer: Qoyllur Rit'i: In Search of the Lord of the Snow Star, online exhibit, W.E.B. Du Bois Library, University of Massachusetts Amherst, October 1999
Catholic holy days
Religion in Peru
Festivals in Peru
Intangible Cultural Heritage of Humanity
Indigenous culture of the Andes
Tourist attractions in Cusco Region
Christian festivals in South America
July
Cultural heritage of Peru
Winter solstice | Quyllurit'i | Astronomy | 3,300 |
38,202,223 | https://en.wikipedia.org/wiki/Critical%20distance%20%28animals%29 | Critical distance for an animal is the distance to which a human or an aggressor animal must approach in order to trigger a defensive attack from the first animal.
The concept was introduced by Swiss zoologist Heini Hediger in 1954, as one of a set of spatial boundaries for an animal comprising flight distance (run boundary), critical distance (attack boundary), personal space (the distance separating members of a non-contact species, such as a pair of swans), and social distance (intraspecies communication distance).
Hediger developed and applied these distance concepts in the context of designing zoos.
As the critical distance is smaller than the flight distance, there are only a few scenarios in the wild when the critical distance may be encroached. As an example, critical distance may be reached if an animal noticed an intruder too late or the animal was "cornered" to a place of no escape.
Edward T. Hall, a cultural anthropologist, reasoned that the flight distance and critical distance have been eliminated in human reactions, and thus proceeded to determine modified criteria for space boundaries in human interactions.
See also
Escape distance
Fight-or-flight response
Flight zone
Personal space
Territoriality
References
Environmental psychology
Ethology
Biological interactions | Critical distance (animals) | Biology,Environmental_science | 249 |
2,685,903 | https://en.wikipedia.org/wiki/Metope | A metope (; ) is a rectangular architectural element of the Doric order, filling the space between triglyphs in a frieze, a decorative band above an architrave.
In early wooden buildings the spaces between triglyphs were at first left open; later these free spaces were closed with metopes. Metopes are not, however, a load-bearing part of a building.
Early metopes were plain, but later metopes were painted or ornamented with reliefs. The painting on most metopes has been lost, but sufficient traces remain to allow a close idea of their original appearance.
In terms of structure, metopes were made out of clay or stone. A stone metope may be carved from a single block with a triglyph (or triglyphs), or they may be cut separately and slide into slots in the triglyph blocks as at the Temple of Aphaea. Sometimes the metopes and friezes were cut from different stone, so as to provide color contrast. Although they tend to be close to square in shape, some metopes are noticeably larger in height or in width. They may also vary in width within a single structure to allow for corner contraction, an adjustment of the column spacing and arrangement of the Doric frieze in a temple to make the design appear more harmonious.
Some of the earliest surviving examples are stone metopes from a peripteral temple at Mycenae, ca. late 7th century BC, and painted clay metopes from Thermus, ca. early 6th century BC. The high-point of relief sculpture on metopes is exemplified by the 92 metopes of the Parthenon, metopes of the temple of Zeus at Olympia, together with the metopes of Temple C at Selinus.
Gallery
See also
Classical order
References
External links
Ancient Greek architecture
Ancient Greek sculpture
Ancient Roman architectural elements
Ancient Roman sculpture
Columns and entablature
Architectural sculpture | Metope | Technology | 414 |
171,964 | https://en.wikipedia.org/wiki/Environmental%20ethics | In environmental philosophy, environmental ethics is an established field of practical philosophy "which reconstructs the essential types of argumentation that can be made for protecting natural entities and the sustainable use of natural resources." The main competing paradigms are anthropocentrism, physiocentrism (called ecocentrism as well), and theocentrism. Environmental ethics exerts influence on a large range of disciplines including environmental law, environmental sociology, ecotheology, ecological economics, ecology and environmental geography.
There are many ethical decisions that human beings make with respect to the environment. These decisions raise numerous questions. For example:
Should humans continue to clear cut forests for the sake of human consumption?
What species or entities ought to be considered for their own sake, independently of their contribution to biodiversity and other extrinsic goods?
Why should humans continue to propagate their species, and life itself?
Should humans continue to make gasoline-powered vehicles?
What environmental obligations do humans need to keep for future generations?
Is it right for humans to knowingly cause the extinction of a species for the convenience of humanity?
How should humans best use and conserve the space environment to secure and expand life?
What role can Planetary Boundaries play in reshaping the human-earth relationship?
The academic field of environmental ethics grew up in response to the works of Rachel Carson and Murray Bookchin and events such as the first Earth Day in 1970, when environmentalists started urging philosophers to consider the philosophical aspects of environmental problems. Two papers published in Science had a crucial impact: Lynn White's "The Historical Roots of our Ecologic Crisis" (March 1967) and Garrett Hardin's "The Tragedy of the Commons" (December 1968). Also influential was Garrett Hardin's later essay called "Exploring New Ethics for Survival", as well as an essay by Aldo Leopold in his A Sand County Almanac, called "The Land Ethic", in which Leopold explicitly claimed that the roots of the ecological crisis were philosophical (1949).
The first international academic journals in this field emerged from North America in the late 1970s and early 1980s – the US-based journal Environmental Ethics in 1979 and the Canadian-based journal The Trumpeter: Journal of Ecosophy in 1983. The first British based journal of this kind, Environmental Values, was launched in 1992.
Marshall's categories
Some scholars have tried to categorise the various ways the natural environment is valued. Alan Marshall and Michael Smith are two examples of this, as cited by Peter Vardy in The Puzzle of Ethics. According to Marshall, three general ethical approaches have emerged over the last 40 years: Libertarian Extension, Ecologic Extension, and Conservation Ethics.
Libertarian extension
Marshall's libertarian extension echoes a civil liberty approach (i.e. a commitment to extending equal rights to all members of a community). In environmentalism, the community is generally thought to consist of non-humans as well as humans.
Andrew Brennan was an advocate of ecologic humanism (eco-humanism), the argument that all ontological entities, animate and inanimate, can be given ethical worth purely on the basis that they exist. The work of Arne Næss and his collaborator Sessions also falls under the libertarian extension, although they preferred the term "deep ecology". Deep ecology is the argument for the intrinsic value or inherent worth of the environment – the view that it is valuable in itself. Their argument falls under both the libertarian extension and the ecologic extension.
Peter Singer's work can be categorized under Marshall's 'libertarian extension'. He reasoned that the "expanding circle of moral worth" should be redrawn to include the rights of non-human animals, and that to not do so would be guilty of speciesism. Singer found it difficult to accept the argument from the intrinsic worth of abiotic or "non-sentient" (non-conscious) entities, and concluded in his first edition of Practical Ethics that they should not be included in the expanding circle of moral worth. This approach is, then, essentially biocentric. However, in a later edition of Practical Ethics, after the work of Næss and Sessions, Singer admits that, although unconvinced by deep ecology, the argument from the intrinsic value of non-sentient entities is plausible, but at best problematic. Singer advocated a humanist ethics.
Ecologic extension
Alan Marshall's category of ecologic extension places emphasis not on human rights but on the recognition of the fundamental interdependence of all biological (and some abiological) entities and their essential diversity. Whereas Libertarian Extension can be thought of as flowing from a political reflection of the natural world, ecologic extension is best thought of as a scientific reflection of the natural world.
Ecological Extension is roughly the same classification as Smith's eco-holism, and it argues for the intrinsic value inherent in collective ecological entities like ecosystems or the global environment as a whole entity. Holmes Rolston, among others, has taken this approach.
This category might include James Lovelock's Gaia hypothesis; the theory that the planet earth alters its geo-physiological structure over time in order to ensure the continuation of an equilibrium of evolving organic and inorganic matter. The planet is characterized as a unified, holistic entity with independent ethical value, compared to which the human race is of no particular significance in the long run.
Conservation ethics
Marshall's category of 'conservation ethics' is an extension of use-value into the non-human biological world. It focuses only on the worth of the environment in terms of its utility or usefulness to humans. It contrasts with the intrinsic-value ideas of 'deep ecology', hence is often referred to as 'shallow ecology', and generally argues for the preservation of the environment on the basis that it has extrinsic value – instrumental to the welfare of human beings. Conservation is therefore a means to an end and purely concerned with mankind and intergenerational considerations. It could be argued that it is this ethic that formed the underlying arguments proposed by governments at the Kyoto summit in 1997 and the three agreements reached at the Rio Earth Summit in 1992.
Humanist theories
Peter Singer advocated the preservation of "world heritage sites", unspoilt parts of the world that acquire a "scarcity value" as they diminish over time. Their preservation is a bequest for future generations, as they have been inherited from humanity's ancestors and should be passed down to future generations so they can have the opportunity to decide whether to enjoy unspoilt countryside or an entirely urban landscape. A good example of a world heritage site would be the tropical rainforest, a very specialist ecosystem that has taken centuries to evolve. Clearing the rainforest for farmland often fails due to soil conditions, and once disturbed, it can take thousands of years to regenerate.
Applied theology
The Christian world view sees the universe as created by God, and humankind accountable to God for the use of the resources entrusted to humankind. Ultimate values are seen in the light of being valuable to God. This applies both in breadth of scope – caring for people (Matthew 25) and environmental issues, e.g. environmental health (Deuteronomy 22.8; 23.12-14) – and dynamic motivation, the love of Christ controlling (2 Corinthians 5.14f) and dealing with the underlying spiritual disease of sin, which shows itself in selfishness and thoughtlessness. In many countries this relationship of accountability is symbolised at harvest thanksgiving. (B.T. Adeney : Global Ethics in New Dictionary of Christian Ethics and Pastoral Theology 1995 Leicester)
Abrahamic religious scholars have used theology to motivate the public. John L. O'Sullivan, who coined the term manifest destiny, and other influential people like him used Abrahamic ideologies to encourage action. These religious scholars, columnists and politicians historically have used these ideas, and continue to do so, to justify the consumptive tendencies of a young America around the time of the Industrial Revolution. In order to solidify the understanding that God had intended for humankind to use earth's natural resources, environmental writers and religious scholars alike proclaimed that humans are separate from nature, on a higher order. Those who critique this point of view may ask the same question that John Muir asks ironically in a section of his book A Thousand-Mile Walk to the Gulf: why are there so many dangers in the natural world, in the form of poisonous plants, animals and natural disasters? The answer given is that those creatures are a result of Adam and Eve's sins in the Garden of Eden.
Since the turn of the 20th century, the application of theology in environmentalism diverged into two schools of thought. The first system of understanding holds religion as the basis of environmental stewardship. The second sees the use of theology as a means to rationalize the unmanaged consumptions of natural resources. Lynn White and Calvin DeWitt represent each side of this dichotomy.
John Muir personified nature as an inviting place away from the loudness of urban centers. "For Muir and the growing number of Americans who shared his views, Satan's home had become God's Own Temple." The use of Abrahamic religious allusions assisted Muir and the Sierra Club to create support for some of the first public nature preserves.
Authors like Terry Tempest Williams as well as John Muir build on the idea that "...God can be found wherever you are, especially outside. Family worship was not just relegated to Sunday in a chapel." References like these assist the general public to make a connection between paintings done at the Hudson River School, Ansel Adams' photographs, along with other types of media, and their religion or spirituality. Placing intrinsic value upon nature through theology is a fundamental idea of deep ecology.
Normative ethical theories
Normative ethics is the field of moral philosophy that investigates how one ought to act: what is morally right and wrong, and how moral standards are determined. Superficially, this approach may seem intrinsically anthropocentric. However, theoretical frameworks from traditional normative ethical theories are abundant within contemporary environmental ethics.
Consequentialism
Consequentialist theories focus on the consequences of actions; they emphasize not what is 'right', but rather what is of 'value' and 'good'. Act utilitarianism, for example, holds that what makes an action right is whether it maximises well-being and reduces pain. Thus, actions that result in greater well-being are considered obligatory and permissible. It has been noted that this is an 'instrumentalist' position towards the environment, and as such not fully adequate to the delicate demands of ecological diversity. Rule utilitarianism, by contrast, is the view that following certain rules without exception is the surest way to bring about the best consequences. This is an important update to act utilitarianism, because agents need not judge the likely consequences of each act; all they must do is determine whether a proposed course of action falls under a specific rule and, if it does, act as the rule specifies.
Aldo Leopold's land ethic (1949) tries to avoid this type of instrumentalism by proposing a more holistic approach to the relationship between humans and their 'biotic community', so as to create a 'limit' based on the maxim that "a thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community; it is wrong when it tends otherwise." Thus, the use of natural resources is permissible as long as it does not disrupt the stability of the ecosystem. Some philosophers have categorised Leopold's views as falling within a consequentialist framework, though it is disputed whether this was intentional. Other consequentialist views, such as that of Peter Singer, tend to emphasise the inclusion of non-human sentient beings in ethical considerations. This view argues that all sentient creatures, being by nature able to feel pleasure and pain, deserve equal moral consideration for their intrinsic value. Nevertheless, non-sentient beings, such as plants, rivers and ecosystems, are considered merely instrumental.
Deontology
Deontological theories state that an action should be based on duties or obligations to what is right, instead of what is good. In strong contrast to consequentialism, this view argues for principles of duty based not on a function of value, but on reasons that make no substantive reference to the consequences of an action. Something of intrinsic value, then, has to be protected not because its goodness would maximise a wider good, but because it is valuable in itself; not as a means towards something, but as an end in itself. Thus, if the natural environment is categorised as intrinsically valuable, any destruction or damage to such would be considered wrong as a whole rather than merely due to a calculated loss of net value. It can be said that this approach is more holistic in principle than one of consequentialist nature, as it fits more adequately with the delicate balance of large ecosystems.
Theories of rights, for example, are generally deontological. That is, within this framework an environmental policy that gives rights to non-human sentient beings would prioritise their conservation in their natural state rather than in an artificial manner. Consider, for example, issues in climate engineering: ocean fertilisation aims to expand marine algae in order to remove higher levels of CO2. A complication of this approach is that it creates salient disruptions to local ecosystems. An environmental ethical theory based on the rights of marine animals in those ecosystems would therefore protect against this type of intervention. Environmental deontologists such as Paul W. Taylor have argued for a Kantian approach to issues of this kind. Taylor argues that all living things are 'teleological centres of life' deserving of rights and respect. His view uses a concept of 'universalizability' to argue that one ought to act only on maxims that could be rationally willed as universal law. Val Plumwood has criticised this approach by noting that the universalisation framework is not necessarily based on 'respect' for the other, as it is grounded in duty rather than in 'becoming' part of the environment.
Virtue ethics
Virtue ethics states that some character traits should be cultivated, and others avoided. This framework avoids problems of defining what is of intrinsic value by instead arguing that what is important is to act in accordance with the correct character trait. The golden mean formulation, for example, states that to be 'generous' (virtue), one should be neither miserly (deficiency) nor extravagant (excess). Unlike deontology and consequentialism, theories of virtue focus their formulations on how the individual has to act to live a flourishing life. This presents a 'subjective flexibility' which seems an adequate position to hold considering the fluctuating demands of sustainability. However, as a consequence, it can also be said that this is an inherently anthropocentric standpoint.
Some ecofeminist theories, such as that of Val Plumwood, have been categorised as a form of virtue ethics. Plumwood argues that a virtue-based ethical framework adapts more fittingly to environmental diversity, as virtues such as 'respect', 'gratitude' and 'sensitivity' are not only suitable to ecological subjectivity but also more applicable to the views of indigenous people. Furthermore, which traits would be considered environmental vices? Ronald Sandler argues that dispositions detrimental to human flourishing, such as 'greed', 'intemperance' and 'arrogance', lead to dispositions detrimental to the protection of the environment, such as 'apathy' towards other species and 'pessimism' about conservation. Views such as this create a mutualistic connection between virtuous human flourishing and environmental flourishing.
Anthropocentrism
Anthropocentrism is the position that humans are the most important or critical element in any given situation; that the human race must always be its own primary concern. Detractors of anthropocentrism argue that the Western tradition biases homo sapiens when considering the environmental ethics of a situation and that humans evaluate their environment or other organisms in terms of their utility for them (see speciesism). Many argue that all environmental studies should include an assessment of the intrinsic value of non-human beings, which would entail a reassessment of humans' ecocultural identities. In fact, based on this very assumption, a philosophical article has recently explored the possibility of humans' willing extinction as a gesture toward other beings. The authors refer to the idea as a thought experiment that should not be understood as a call for action.
Baruch Spinoza reasoned that if humans were to look at things objectively, they would discover that everything in the universe has a unique value. Likewise, it is possible that a human-centred or anthropocentric/androcentric ethic is not an accurate depiction of reality, and there is a bigger picture that humans may or may not be able to understand from a human perspective.
Peter Vardy distinguished between two types of anthropocentrism. A strong anthropocentric ethic argues that humans are at the center of reality and it is right for them to be so. Weak anthropocentrism, however, argues that reality can only be interpreted from a human point of view, thus humans have to be at the centre of reality as they see it.
Another point of view has been developed by Bryan Norton, who has become one of the essential actors of environmental ethics by launching environmental pragmatism, now one of its leading trends. Environmental pragmatism refuses to take a stance in disputes between defenders of anthropocentrist and non-anthropocentrist ethics. Instead, Norton distinguishes between strong anthropocentrism and weak-or-extended-anthropocentrism and argues that the former must underestimate the diversity of instrumental values humans may derive from the natural world.
A recent view relates anthropocentrism to the future of life. Biotic ethics are based on the human identity as part of gene/protein organic life whose effective purpose is self-propagation. This implies a human purpose to secure and propagate life. Humans are central because only they can secure life beyond the duration of the Sun, possibly for trillions of eons. Biotic ethics values life itself, as embodied in biological structures and processes. Humans are special because they can secure the future of life on cosmological scales. In particular, humans can continue sentient life that enjoys its existence, adding further motivation to propagate life. Humans can secure the future of life, and this future can give human existence a cosmic purpose.
Status of the field
Only after 1990 did the field gain institutional recognition, with programs at Colorado State University, the University of Montana, Bowling Green State University, and the University of North Texas. In 1991, Schumacher College of Dartington, England, was founded and now provides an MSc in Holistic Science.
These programs began to offer a master's degree with a specialty in environmental ethics/philosophy. Beginning in 2005 the Department of Philosophy and Religion Studies at the University of North Texas offered a PhD program with a concentration in environmental ethics/philosophy.
In Germany, the University of Greifswald has recently established an international program in Landscape Ecology & Nature Conservation with a strong focus on environmental ethics. In 2009, the University of Munich and Deutsches Museum founded the Rachel Carson Center for Environment and Society, an international, interdisciplinary center for research and education in the environmental humanities.
Relationship with animal ethics
Differing conceptions of the treatment of and obligations towards animals, particularly those living in the wild, within animal ethics and environmental ethics have been a source of controversy between the two ethical positions; some ethicists have asserted that the two positions are incompatible, while others have argued that these disagreements can be overcome.
See also
Anarcho-primitivism
Biocentrism
Bioethics
Climate ethics
Conservation movement
Crop art
Earth Economics (policy think tank)
Ecocentrism
Ecological economics
EcoQuest (a series of two educational games)
Environmental health ethics
Environmental movement
Environmental organization
Environmental politics
Environmental racism
Environmental resource management
Environmental skepticism
Environmental virtue ethics
Hans Jonas
Human ecology
List of environmental philosophers
Nature conservation
Population control
Resource depletion
Self-validating reduction
Solastalgia
Terraforming
Trail ethics
Van Rensselaer Potter
Veganism
Artificialization
Further reading
Brennan, Andrew; Lo, Yeuk-Sze (2016): Environmental Ethics. In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). Stanford University. https://plato.stanford.edu/archives/win2016/entries/ethics-environmental/.
Ott, Konrad (2020): Environmental ethics. In: Kirchhoff, Thomas (ed.): Online Encyclopedia Philosophy of Nature / Online Lexikon Naturphilosophie, doi: https://doi.org/10.11588/oepn.2020.0.71420; https://journals.ub.uni-heidelberg.de/index.php/oepn/article/view/71420.
External links
Bioethics Literature Database
Brief History of Environmental Ethics
Thesaurus Ethics in the Life Sciences
EnviroLink Library: Environmental Ethics - online resource for environmental ethics information
EnviroLink Forum - Environmental Ethics Discussion/Debate
Environmental Ethics (journal)
Sustainable and Ethical Architecture Architectural Firm
Stanford Encyclopedia of Philosophy
Environmental Ethics entry in the Internet Encyclopedia of Philosophy.
Center for Environmental Philosophy
UNT Dept of Philosophy
Creation Care Reading Room: Extensive online resources for environment and faith (Tyndale Seminary)
Category List - Religion-Online.org "Ecology/Environment"
Islam, Christianity and the Environment
Relational ethics
Environmentalism
Environmental philosophy | Environmental ethics | Environmental_science | 4,532 |
47,677,275 | https://en.wikipedia.org/wiki/Steroid%20use%20in%20Australia | Anabolic–androgenic steroids are drugs derived from the male hormone testosterone. Anabolic steroids are used for muscle-building and strength gain, for cosmetic reasons as well as for performance enhancement in athletics and bodybuilding. Anabolic steroids work in several ways, including by increasing protein synthesis in the muscles and by suppressing catabolism (the process of breaking down skeletal muscle for energy). It is common for teens and adults to use steroids because they stimulate and encourage muscle growth much more rapidly than natural bodybuilding.
Statistics
In Australia, many people are encouraged to use steroids by the body-image expectations created by society. In secondary schools, 3.2% of boys and 1.2% of girls use steroids. Many Australian bodybuilders visit Bangkok and Pattaya in Thailand because pharmacies there sell some steroid brands ten times more cheaply than they are available on the Australian black market. Australians have also purchased steroids in other countries to avoid a possible criminal record at home. Australian Crime Commission statistics showed a 106% increase in border detections of "performance and image-enhancing drugs" in the last financial year, with 5,561 detections.
Notable events
In the first 3 months of 2008, 300 AAS seizures were reported by the Australian Customs and Border Protection Service.
See also
Drugs in sport in Australia
References
Drugs in Australia
Anabolic–androgenic steroids | Steroid use in Australia | Chemistry | 294 |
16,509,260 | https://en.wikipedia.org/wiki/Total%20effective%20dose%20equivalent | The Total effective dose equivalent (TEDE) is a radiation dosimetry quantity defined by the US Nuclear Regulatory Commission to monitor and control human exposure to ionizing radiation. It is defined differently in the NRC regulations and NRC glossary. According to the regulations, it is the sum of effective dose equivalent from external exposure and committed effective dose equivalent from internal exposure, thereby taking into account all known exposures. However, the NRC glossary defines it as the sum of the deep-dose equivalent and committed effective dose equivalent, which would appear to exclude the effective dose to the skin and eyes from non-penetrating radiation such as beta. These surface doses are included in the NRC's shallow dose equivalent, along with contributions from penetrating (gamma) radiation.
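The two definitions can be written out side by side. The following summary is an illustration of the sums described above, not a quotation of regulatory text:

```latex
% NRC glossary definition: deep-dose equivalent (external, penetrating
% radiation) plus committed effective dose equivalent (internal emitters):
\[ \mathrm{TEDE} = \mathrm{DDE} + \mathrm{CEDE} \]

% Regulatory definition: effective dose equivalent from all external
% exposure plus committed effective dose equivalent:
\[ \mathrm{TEDE} = \mathrm{EDE}_{\mathrm{external}} + \mathrm{CEDE} \]
```

For example, a worker who accrues a deep-dose equivalent of 2 mSv from external gamma radiation and a committed effective dose equivalent of 1.5 mSv from an inhaled radionuclide has a TEDE of 3.5 mSv under the glossary definition.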
Regulatory limits are imposed on the TEDE for occupationally exposed individuals and members of the general public.
See also
Radioactivity
Radiation poisoning
Ionizing radiation
Deep-dose equivalent
Collective dose
Cumulative dose
Committed dose equivalent
Committed effective dose equivalent
References
10 CFR 20.1003
External links
- "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems.
Radioactivity
Radiation health effects | Total effective dose equivalent | Physics,Chemistry,Materials_science | 258 |
70,628,115 | https://en.wikipedia.org/wiki/Biofoam | Biofoams are biological or biologically derived foams, making up lightweight and porous cellular solids. A relatively new term, its use in academia began in the 1980s in relation to the scum that formed on activated sludge plants.
Biofoams is a broad umbrella term that covers a large variety of topics including naturally occurring foams, as well as foams produced from biological materials such as soy oil and cellulose. Biofoams have been a topic of continuous research because synthesized biofoams are being considered as alternatives to traditional petroleum-based foams. Due to the variable nature of synthesized foams, they can have a variety of characteristics and material properties that make them suitable for packaging, insulation, and other applications.
Naturally occurring foams
Foams can form naturally within a variety of living organisms. For example, wood, cork, and plant matter all can have foam components or structures. Fungi are generally composed of mycelium, which is made up of hollow filaments of chitin nanofibers bound to other components. Animal parts like cancellous bone, horseshoe crab shells, toucan beaks, sponge, coral, feathers, and antlers all contain foam-like structures which decrease overall weight at the expense of other material properties.
Structures like bone, antlers, and shells have strong materials housing weaker but lighter materials within. Bones tend to have compact, dense external regions, which protect the internal foam-like cancellous bone. The same principle applies to horseshoe crab shells, toucan beaks, and antlers. The barbs and shafts of feathers similarly contain closed-cell foam.
Protective foams can be formed externally by parent organisms or by eggs interacting with the environment: tunicate eggs mix with sea water to create a liquid-based foam; tree frog eggs grow in protein foams above and on water (see Figure 1); certain freshwater fish lay eggs in surface foam made from their mucus; deep-sea fish produce eggs in swimbladders of dual-layered foams; and some insects keep their larvae in foam.
Biomimetic synthetic foams
Honeycomb
Honeycomb refers to bioinspired patterns that provide a lightweight design for energy absorbing structures. Honeycomb design can be found in different structural biological components such as spongy bone and plant vasculature. Biologically inspired honeycomb structures include Kelvin, Weaire and Floret honeycomb (see Figure 2); each with a slightly different structure in comparison to the natural hexagonal honeycomb. These variations on the biological design have yielded significantly improved energy absorption results in comparison to traditional hexagonal honeycomb biofoam.
Due to these increased energy absorption performances, honeycomb inspired structures are being researched for use inside vehicle crumple zones. By using honeycomb structures as the inner core and surrounding the structure with a more rigid structural shell, these components can absorb impact energy during a crash and reduce the amount of energy the driver experiences.
Aerogel
Aerogels are able to fill large volumes with minimal material yielding special properties such as low density and low thermal conductivity. These aerogels tend to have internal structures categorized as open or closed cell structures, the same cell structure that is used to define many 3-dimensional honeycomb biofoams. Aerogels are also being engineered to mirror the internal foam structures of animal hairs (see Figure 3). These biomimetic aerogels are being actively researched for their promising elastic and insulative properties.
Material properties
Foam cell structures
A foam is considered open-celled if at least two of its facets are holes rather than walls. In this case the entirety of the load on the foam is on the cross-beams that make up the edges of the cell. If no more than one of the walls of the cell are holes, the foam is considered closed-celled in nature. For most synthetic foams, a mixture of closed cell and open cell character is observed due to cells rupturing during the foaming process and then the matrix solidifying.
The mechanical properties of the foam then depend on the closed-cell character of the foam, as derived by Gibson and Ashby:

$\frac{E}{E_s} \approx \phi^2 \left(\frac{\rho}{\rho_s}\right)^2 + (1 - \phi)\,\frac{\rho}{\rho_s}$

where $E$ is the elastic modulus, $\rho$ is the density of the material, $\phi$ is the fraction of the solid contained in the cell edges (the remaining fraction, $1 - \phi$, residing in the cell faces), and the subscript $s$ denotes the bulk property of the material rather than that of the foam sample.
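As a concrete illustration of this scaling, the following sketch evaluates the Gibson–Ashby closed-cell relation given above; the numeric inputs are illustrative assumptions, not measurements from the works discussed here:

```python
def relative_modulus(rel_density: float, phi: float) -> float:
    """Gibson-Ashby estimate of E/E_s for a closed-cell foam.

    rel_density: foam density over bulk solid density (rho/rho_s)
    phi:         fraction of the solid sitting in the cell edges;
                 the remaining (1 - phi) sits in the cell faces
    """
    # The edge (bending-dominated) contribution scales quadratically
    # with relative density; the face (membrane) contribution linearly.
    return phi**2 * rel_density**2 + (1 - phi) * rel_density


# Illustrative numbers: a foam at 10% relative density with 80% of its
# solid in the edges retains only about 2.6% of the bulk stiffness.
print(relative_modulus(0.10, 0.8))  # ~0.0264
```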
Liquid and solid foams
For many polymeric foams, a solidified foam is formed by polymerizing and foaming a liquid polymer mixture and then allowing that foam to solidify. Thus, liquid foam aging effects do occur before solidification. In the liquid foam, gravitational forces and internal pressures cause a flow of the liquid toward the bottom of the foam. This causes some of the foam cells to form into irregular polyhedra as liquid drains, which are less stable structures than the spherical structures of a traditional foam. These structures can however be stabilized by the presence of a surfactant.
The foam structure before solidification is an inherently unstable one, as the voids present greatly increase the surface free energy of the structure. In some synthetic biofoams, a surfactant can be used in order to lower the surface free energy of the foam and therefore stabilize the foam. In some natural biofoams, proteins can act as the surfactants for the foams to form and stabilize.
Fiber reinforcement
During the solidification of synthetic biofoams, fibers may be added as a reinforcement agent for the matrix. This additionally will create a heterogeneous nucleation site for the air pockets of the foam itself during the foaming process. However, as fiber content increases, it can begin to inhibit formation of the cellular structure of the matrix.
Applications
Packaging
In relation to packaging, starches and biopolyesters make up these biofoams, as they are adequate replacements for expanded polystyrene. Polylactic acid (PLA) is a common basis for these biofoams, since it offers a substitute for the polyolefin-based foams commonly used in automotive parts, pharmaceutical products, and short-lifetime disposable packaging, owing to its bio-based and biodegradable properties. PLA is produced by ring-opening polymerization of lactide, which is formed from lactic acid obtained by bacterial fermentation; the process is shown in Figure 4.
PLA does not have the most desirable traits for biodegradable packaging, as it has a low heat-distortion temperature and unfavorable water-barrier characteristics. On the other hand, PLA has been shown to have desirable packaging properties, including high ultraviolet-light barrier properties and low melting and glass-transition temperatures. More recently, PGA has been introduced in the packaging industry, as it is comparable to PLA. Table 1 shows the characteristics of both biofoams and how they compare. As shown, PGA has a strong, regular stereochemistry, which gives it high barrier and mechanical properties, making it desirable for the packaging industry. Copolymerization of PGA with PLA has been explored so that PGA can help enhance the barrier properties of PLA when used in packaging.
Table 1: The properties of PLA in comparison to PGA.
Biomedical
The most popular biofoam in biomedical devices is also PLA. PLA's properties are desirable in biomedical applications, especially in combination with other polymers. Specifically, its biocompatibility and biodegradability make it favorable in tissue engineering through the use of FDM 3D printing. PLA does well in these printing environments, as its glass-transition temperature is low and it exhibits shape memory. In recent studies, PLA has been combined with hydroxyapatite (HA) in order to make the modulus of the sample more favorable for its application in repairing bone failure. Specifically in tissue engineering, HA has also been shown to promote osteogenesis by triggering osteoblasts and pre-osteoblastic cells. HA is a strong material, which makes it an attractive additive for PLA, since PLA has low toughness, with about 10% elongation before failure. FFF-based 3D printing was used, and compression tests were performed, as demonstrated in Figure 5. The results showed a self-healing capability of the sample, which could be used in certain biomedical practices.
Environmental impact
With recent attention toward climate change, global warming, and sustainability, there has been a new wave of research regarding the creation and sustainability of biodegradable products. This research has evolved to include the creation of biodegradable biofoams, with the intention to replace other foams that may be environmentally harmful or whose production may be unsustainable. Following this vein, Gunawan et al. conducted research to develop "commercially-relevant polyurethane products that can biodegrade in the natural environment". One such product includes flip-flops, so as part of the research a flip-flop made from algae-derived polyurethane was prototyped (see Figure 7). This research ultimately resulted in the conclusion that in both a compost and a soil environment (different microorganisms present in each environment) significant degradation occurs in polyurethane foam formulated from algae oil.
Similarly, research has been done in which algae oil (AO) and residual palm oil (RPO) have been formulated into foam polyurethane at different ratios to determine which ratio has the optimum biodegradability. RPO is recovered from the waste of palm oil mills as a byproduct of the manufacturing process. After undergoing tests to determine biodegradability, as well as a thermogravimetric analysis, the team determined that the material could be utilized in applications such as insulation or fire retardants depending on the AO/RPO ratio.
Another focus of biofoam research is the development of biofoams that are not only biodegradable, but are also cost-effective and require less energy to produce. Luo et al. have conducted research in this area of biofoams and have ultimately developed a biofoam that is produced from a “higher content of nature bioresource materials” and using a “minimal [number of] processing steps”. The processing steps include the one-pot method of foam preparation published by F. Zhang and X. Luo in their paper about developing polyurethane biofoams as an alternative to petroleum based foams for specific applications.
Ongoing research
Research efforts have been put into using natural components in the creation of potentially biodegradable foam products. Mycelium (Figure 8), chitosan (Figure 9), wheat gluten (Figure 10), and cellulose (Figure 11) have all been used to create biofoams for different purposes. The wheat gluten example was used in combination with graphene to attempt to make a conductive biofoam. The mycelium-based, chitosan-based, and cellulose-based biofoam examples are intended to become cost effective and low density material options.
References
Foams | Biofoam | Chemistry | 2,298 |
21,100,715 | https://en.wikipedia.org/wiki/Krull%E2%80%93Schmidt%20theorem | In mathematics, the Krull–Schmidt theorem states that a group subjected to certain finiteness conditions on chains of subgroups, can be uniquely written as a finite direct product of indecomposable subgroups.
Definitions
We say that a group G satisfies the ascending chain condition (ACC) on subgroups if every increasing sequence of subgroups of G:

$G_1 \le G_2 \le G_3 \le \cdots$

is eventually constant, i.e., there exists N such that $G_N = G_{N+1} = G_{N+2} = \cdots$. We say that G satisfies the ACC on normal subgroups if every such sequence of normal subgroups of G eventually becomes constant.
Likewise, one can define the descending chain condition (DCC) on (normal) subgroups, by looking at all decreasing sequences of (normal) subgroups:

$G_1 \ge G_2 \ge G_3 \ge \cdots$
Clearly, all finite groups satisfy both ACC and DCC on subgroups. The infinite cyclic group $\mathbb{Z}$ satisfies ACC but not DCC, since $(2) \supset (2^2) \supset (2^3) \supset \cdots$ is an infinite decreasing sequence of subgroups. On the other hand, the $p^\infty$-torsion part of $\mathbb{Q}/\mathbb{Z}$ (the quasicyclic $p$-group $\mathbb{Z}(p^\infty)$) satisfies DCC but not ACC.
We say a group G is indecomposable if it cannot be written as a direct product of non-trivial subgroups G = H × K.
Statement
If $G$ is a group that satisfies both ACC and DCC on normal subgroups, then there is exactly one way of writing $G$ as a direct product $G = G_1 \times G_2 \times \cdots \times G_k$ of finitely many indecomposable subgroups of $G$. Here, uniqueness means that direct decompositions into indecomposable subgroups have the exchange property. That is: suppose $G = H_1 \times H_2 \times \cdots \times H_l$ is another expression of $G$ as a product of indecomposable subgroups. Then $k = l$ and there is a reindexing of the $H_i$'s satisfying
$G_i$ and $H_i$ are isomorphic for each $i$;
$G = G_1 \times \cdots \times G_r \times H_{r+1} \times \cdots \times H_l$ for each $r$.
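As a small worked illustration (added here for concreteness; it is not part of the original statement): the cyclic group of order 12 is finite, hence satisfies both chain conditions, and its decomposition into indecomposables is unique in the above sense.

```latex
% Z/12 decomposes into indecomposable (cyclic prime-power) factors:
\[ \mathbb{Z}/12\mathbb{Z} \;\cong\; \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}. \]
% The seemingly different product Z/2 x Z/6 is no counterexample:
% it is not isomorphic to Z/12 (it has no element of order 4), and
% Z/6 is in any case decomposable, since
\[ \mathbb{Z}/6\mathbb{Z} \;\cong\; \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}. \]
% Hence the only decomposition of Z/12 into indecomposable factors
% is Z/4 x Z/3, up to the order of the factors.
```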
Proof
Proving existence is relatively straightforward: let $S$ be the set of all normal subgroups of $G$ that cannot be written as a product of indecomposable subgroups. Any indecomposable subgroup is (trivially) the one-term direct product of itself, so every member of $S$ is decomposable. If Krull–Schmidt fails, then $S$ contains $G$; so we may iteratively construct a descending series of direct factors, and this contradicts the DCC. One can then invert the construction to show that all direct factors of $G$ appear in this way.
The proof of uniqueness, on the other hand, is quite long and requires a sequence of technical lemmas; for a complete exposition, see the references.
Remark
The theorem does not assert the existence of a non-trivial decomposition, but merely that any such two decompositions (if they exist) are the same.
Remak decomposition
A Remak decomposition, introduced by Robert Remak, is a decomposition of an abelian group or similar object into a finite direct sum of indecomposable objects. The Krull–Schmidt theorem gives conditions for a Remak decomposition to exist and for its factors to be unique.
Krull–Schmidt theorem for modules
If $M$ is a module that satisfies the ACC and DCC on submodules (that is, it is both Noetherian and Artinian or – equivalently – of finite length), then $M$ is a direct sum of indecomposable modules. Up to a permutation, the indecomposable components in such a direct sum are uniquely determined up to isomorphism.
In general, the theorem fails if one only assumes that the module is Noetherian or Artinian.
History
The present-day Krull–Schmidt theorem was first proved by Joseph Wedderburn (Ann. of Math (1909)), for finite groups, though he mentions some credit is due to an earlier study of G.A. Miller where direct products of abelian groups were considered. Wedderburn's theorem is stated as an exchange property between direct decompositions of maximum length. However, Wedderburn's proof makes no use of automorphisms.
The thesis of Robert Remak (1911) derived the same uniqueness result as Wedderburn but also proved (in modern terminology) that the group of central automorphisms acts transitively on the set of direct decompositions of maximum length of a finite group. From that stronger theorem Remak also proved various corollaries including that groups with a trivial center and perfect groups have a unique Remak decomposition.
Otto Schmidt (Sur les produits directs, S. M. F. Bull. 41 (1913), 161–164), simplified the main theorems of Remak to the 3 page predecessor to today's textbook proofs. His method improves Remak's use of idempotents to create the appropriate central automorphisms. Both Remak and Schmidt published subsequent proofs and corollaries to their theorems.
Wolfgang Krull (Über verallgemeinerte endliche Abelsche Gruppen, M. Z. 23 (1925) 161–196), returned to G.A. Miller's original problem of direct products of abelian groups by extending to abelian operator groups with ascending and descending chain conditions. This is most often stated in the language of modules. His proof observes that the idempotents used in the proofs of Remak and Schmidt can be restricted to module homomorphisms; the remaining details of the proof are largely unchanged.
O. Ore unified the proofs from various categories, including finite groups, abelian operator groups, rings and algebras, by proving that the exchange theorem of Wedderburn holds for modular lattices with descending and ascending chain conditions. This proof makes no use of idempotents and does not reprove the transitivity of Remak's theorems.
Kurosh's The Theory of Groups and Zassenhaus' The Theory of Groups include the proofs of Schmidt and Ore under the name of Remak–Schmidt but acknowledge Wedderburn and Ore. Later texts use the title Krull–Schmidt (Hungerford's Algebra) and Krull–Schmidt–Azumaya (Curtis–Reiner). The name Krull–Schmidt is now popularly substituted for any theorem concerning uniqueness of direct products of maximum size. Some authors choose to call direct decompositions of maximum-size Remak decompositions to honor his contributions.
See also
Krull–Schmidt category
References
Further reading
A. Facchini: Module theory. Endomorphism rings and direct sum decompositions in some classes of modules. Progress in Mathematics, 167. Birkhäuser Verlag, Basel, 1998.
C.M. Ringel: Krull–Remak–Schmidt fails for Artinian modules over local rings. Algebr. Represent. Theory 4 (2001), no. 1, 77–86.
External links
Page at PlanetMath
Module theory
Theorems in group theory | Krull–Schmidt theorem | Mathematics | 1,433 |
45,251,913 | https://en.wikipedia.org/wiki/Dacrymyces%20ovisporus | Dacrymyces ovisporus is a species of fungus in the family Dacrymycetaceae. It was first described scientifically by German mycologist Julius Oscar Brefeld in 1888. The fungus produces roughly spherical to ovoid spores, and both one- and two-spored basidia.
References
External links
Dacrymycetes
Fungi described in 1888
Fungus species | Dacrymyces ovisporus | Biology | 82 |
40,014,784 | https://en.wikipedia.org/wiki/Microapartment | A microapartment, also known as a microflat, micro-condo, or micro-unit, is a one-room, self-contained living space, usually purpose-built, designed to accommodate a sitting space, sleeping space, bathroom and kitchenette within 14–32 square metres (150–350 sq ft).
Microapartments are becoming popular in urban centres in Europe, Japan, Hong Kong and North America, maximizing profits for developers and landlords and providing relatively low-priced accommodation.
Unlike a traditional studio flat, residents may also have access to a communal kitchen, communal bathroom/shower, patio and roof garden. The microapartments are often designed for futons, or with pull-down beds, folding desks and tables, and extra-small or hidden appliances. Microapartments also differ from bedsits, the traditional British bed-sitting room, in that they are self-contained, with their own bathroom, toilet, and kitchenette.
Regions
Canada
Units under 500 square feet are referred to as micro-units, and units under 200 square feet as nano-units. Development of such units expanded starting around 2015 in Toronto, while Vancouver bylaws set the minimum size for condo units at 398 square feet and for rental units at 320 square feet.
Micro-units are typically owned by investors, small entrepreneurs who hold one or multiple units in markets like Toronto. Micro-condos are marketed more towards international students, newcomers, entry-level workers, empty-nesters, and those who no longer want roommates. For developers it is easier to make money on micro-units, with higher returns, because more of the general public can afford a smaller place, and they give buyers options besides one- or two-bedroom units. Other reasons developers cite for building micro-units include the lack of housing affordability, increases in the price of land, and the rising cost of construction. Micro-units often command a higher rent per square foot than larger unit sizes. Instead of more mixed housing for long-term tenants, micro-condos are often justified by the idea that the city or condo amenities provide whatever else is needed. This does not account for those who work from home or have other lifestyle needs at home. As a result of the lack of mixed housing and intensification, fewer families are now staying in the city. Micro-condos are also more likely to sit as unsold inventory during a downturn in the real estate market.
In Toronto, some micro-condos do not have appliances like a standalone oven, instead relying on convection microwaves. Other modifications include stovetops with only two burners and smaller sinks. A decline in cooking, with the rise of food-delivery apps, has partly encouraged the modification or removal of appliances. Some designs to maximize space include drop-down Murphy beds, stacked laundry, floating desks, and slide-out shelves. These designs allow the developer to save thousands of dollars in development costs. Such units are also aimed at short-term rental options such as Airbnb.
Hong Kong
Gary Chang, an architect in Hong Kong, has designed a large 32-square-metre (344 sq ft) microapartment with sliding walls attached to tracks on the ceiling. By moving the walls around, and using built-in folding furniture and worktops, he can convert the space into 24 different rooms, including a kitchen, library, laundry room, dining room, bar and video-game room.
In Hong Kong, developers are embracing the micro-living trend, renting microapartments at sky-high prices. The Wall Street Journal compares the 180-square-feet flat in High Place, Sai Ying Pun to the size of a U.S. parking spot (160 square feet) in a video, highlighting the soaring property prices in Hong Kong (one of the apartments in High Place was sold for more than US$500,000 in June 2015).
Italy
In Rome, where the average price of property in 2010 was $7,800 per square metre ($725 per square foot), microapartments as small as 4 square metres (45 square feet) have been advertised.
United States
In the United States, most cities have zoning codes that set the minimum size for a housing unit (often 400 square feet) as well as the number of non-related persons who can live together in one unit. Tiny apartments began as a coastal city trend but they are spreading to the Midwestern United States.
New York City
Micro apartments have been around in New York City since at least the 19th century. The average size of a New York City tenement unit then was around 284 square feet, and four or more people would cram into that tiny space. In June 2016, New York City got its first purpose-built microapartment building, Carmel Place, with 55 units. Common's Williamsburg in Brooklyn rents single rooms where tenants share a kitchen for $2,050 per month; The Guardian states that "[s]ingle room occupancy housing is obviously not a new concept, however, the genius of late capitalism is that it has made it desirable" to high-income renters.
California
In 2017, California passed a law that encourages development of "efficiency units" of at least 150 sq ft by disallowing localities from limiting their numbers near public universities and public transportation.
In San Francisco, Starcity is converting unused parking garages, commercial spaces and offices into single room residential units, where tenants (tech professionals are the typical renter) get a furnished bedroom and access to wifi, janitor services and common kitchens and lounges for $1,400 to $2,400 per month, an approach that has been called "dorm living for grown ups".
In 2018, newly built one-room rentals in San Francisco at the Starcity development, aimed at high-income tenants, were referred to as single room occupancy rooms "by another name".
Boston
Boston's first microapartment building opened in August 2016, on Commonwealth Avenue in Packard's Corner. As the largest microapartment building in the United States, the building is currently being leased by Boston University to house 341 students during the renovation of another university residence. The building contains 180 units that each contain a bathroom with stand-up shower; a kitchen with all stainless-steel appliances that include an oven, a microwave, a dishwasher, and a refrigerator. Each unit also includes a stand up washer-dryer unit. Other amenities include an optional parking garage and indoor bike room in the basement, currently unused retail space, a lounge space, a rooftop penthouse, a deck overlooking the Allston neighbors, and an entertainment room that will be converted to a fitness center at the end of the University's tenure at the property, which is anticipated to be in 2018.
Seattle
There has been a backlash in some cities against the increasing number of microapartments. In Seattle, some residents have complained that high-density microhousing changes the character of neighborhoods, suddenly increasing demand for parking spaces and other amenities. From 2009 to 2014, Seattle had a big increase in the building and creation of new single room occupancy (SRO) units designed to be rented at market rates, which had an average monthly rent of $660; In 2013, for example, 1,800 SRO units and microapartment units were built. In 2018, the media depicted the increasing popularity of micro apartments as a new trend; however, an article about Seattle in Market Urbanism Report states this is a "reenactment of the way U.S. cities have long worked", as individuals seeking "solo living and centralized locations" are willing to accept smaller apartments even though the per-square-foot prices may be higher than some larger units. The report states that 2018-era micro apartments were known as SROs in the early 20th century, and they housed "rich and poor alike" (although the rich lived in live-in luxury hotels and the poor lived in "bunkhouses for day laborers"). Neighborhood groups in Seattle have criticized new micro apartment SRO units, arguing that they "harmed community character and provided...inhumane living conditions"; the city passed regulations that outlawed micro apartment/SRO construction.
United Kingdom
In the UK, property developers are using office-to-residential permitted development rights, a policy introduced in 2013, to transform old office buildings into microapartment developments. The nationally described space standard stipulates that new homes in the UK cannot be smaller than 37sqm; however, this does not apply to conversions. London-based developer Inspired Homes has taken advantage of office-to-residential permitted development rights to deliver over 400 microapartments. A micro-property in the UK has no strict definition but typically refers to properties with a floor area below 37sqm. Which? magazine reported that almost 8,000 new micro-homes were built in 2016, the highest number on record.
As of 2017, the largest microapartment building in the world is The Collective Old Oak, which opened in London on May 1, 2016. Designed by PLP Architecture, the development has 546 rooms, with most units grouped into "twodios" – two en-suite bedrooms that share a small kitchenette. There are also some private suites. Unit sizes range from en-suite rooms with a shared kitchenette up to larger en-suite units with their own kitchenette. Each floor features one larger kitchen with a dining table, shared between 30 and 70 residents, and themed communal living spaces such as a games room, a cinema, a 'disco-launderette', a hidden garden and a spa. A restaurant, gym and co-working spaces are located on the lower floors of the building.
Criticism
Although some prefer to live in microapartments, others only temporarily live in microapartments due to economic reasons, and would move to a larger house or apartment if they could afford to do so.
The quality of living in microapartments has been called into question due to a lack of space.
Susan Saegert, a professor of environmental psychology at the CUNY Graduate Center, said: "I've studied children in crowded apartments and low-income housing a lot, and they can end up becoming withdrawn, and have trouble studying and concentrating." The small size of a microapartment can also be an issue for some tenants, as its confined nature may permit strong odors to linger.
Samuel D. Gosling states that "an apartment has to fill other psychological needs as well, such as self-expression and relaxation, that might not be as easily met in a highly cramped space". In micro-apartments occupied by multiple people, privacy can be an issue.
See also
Bedsit
Capsule hotel
List of house types
Minimalist architecture and space
One-room mansion
Pied-à-terre
Single room occupancy
Rooming house
Notes
Further reading
Apartment types
House types
Housing in the United Kingdom
Housing
Urban design
Urban planning
Affordable housing
Living arrangements
Intentional communities
Alternative housing | Microapartment | Engineering | 2,300 |
2,800,098 | https://en.wikipedia.org/wiki/Ralph%20Johnson%20%28computer%20scientist%29 | Ralph E. Johnson is a Research Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He is a co-author of the influential computer science textbook Design Patterns: Elements of Reusable Object-Oriented Software, for which he won the 2010 ACM SIGSOFT Outstanding Research Award. In 2006 he was awarded the Dahl–Nygaard Prize for his contributions to the state of the art embodied in that book as well.
Johnson was an early pioneer in the Smalltalk community and is a continued supporter of the language. He has held several executive roles at the ACM conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA). He initiated the popular OOPSLA Design Fest workshop.
References
External links
Ralph Johnson's blog
Ralph E. Johnson at UIUC
Interview with Ralph Johnson from OOPSLA 2009, discussing Parallel Programming Patterns
Presentation on a Pattern Language for Parallel Programming from QCon London 2010
American computer scientists
Living people
Scientists from Illinois
University of Illinois Urbana-Champaign faculty
Dahl–Nygaard Prize
Year of birth missing (living people) | Ralph Johnson (computer scientist) | Technology | 219 |
76,423,933 | https://en.wikipedia.org/wiki/Janzen%E2%80%93Rayleigh%20expansion | In fluid dynamics, the Janzen–Rayleigh expansion is a regular perturbation expansion in powers of the relevant Mach number, used for velocity fields that possess slight compressibility effects. The expansion was first studied by O. Janzen in 1913 and Lord Rayleigh in 1916.
Steady potential flow
Consider a steady potential flow characterized by the velocity potential $\varphi(\mathbf{x})$. Then $\varphi$ satisfies

$c^2 \nabla^2 \varphi = \tfrac{1}{2}\,\nabla\varphi \cdot \nabla\!\left(|\nabla\varphi|^2\right),$

where the sound speed $c = c(v)$ is expressed as a function of the velocity magnitude $v = |\nabla\varphi|$. For a polytropic gas, we can write

$c^2 = (\gamma - 1)\left(h_0 - \frac{v^2}{2}\right) = c_0^2 - \frac{\gamma - 1}{2}\,v^2,$

where $\gamma$ is the specific heat ratio, $c_0$ is the stagnation sound speed (i.e., the sound speed in a gas at rest) and $h_0 = c_0^2/(\gamma - 1)$ is the stagnation enthalpy. Let $U$ be the characteristic velocity scale and $c_0$ the characteristic value of the sound speed; then the function $c^2/c_0^2$ is of the form

$\frac{c^2}{c_0^2} = 1 - \frac{\gamma - 1}{2}\,M^2 \hat v^2, \qquad \hat v = \frac{v}{U},$

where $M = U/c_0$ is the relevant Mach number.
For small Mach numbers, we can introduce the series

$\varphi = \varphi_0 + M^2 \varphi_1 + M^4 \varphi_2 + \cdots$

Substituting this into the governing equation and collecting terms of different orders of $M^2$ leads to a set of equations. These are

$\nabla^2 \varphi_0 = 0,$

$\nabla^2 \varphi_1 = \tfrac{1}{2}\,\nabla\varphi_0 \cdot \nabla\!\left(|\nabla\varphi_0|^2\right)$ (with velocities scaled by $U$ and the sound speed by $c_0$),

and so on. Note that $\varphi_1$ is independent of $\gamma$; the latter quantity first appears in the problem for $\varphi_2$.
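The leading-order problem can be checked symbolically. The sketch below is our own illustration and assumes the nondimensional first-order source term reconstructed above; it verifies that the classical incompressible potential for flow past a circular cylinder is harmonic and assembles the forcing of the first-order problem:

```python
import sympy as sp

x, y, U, a = sp.symbols('x y U a', positive=True)
r2 = x**2 + y**2

# Leading-order (incompressible) potential for flow past a circular
# cylinder of radius a: phi0 = U*(r + a**2/r)*cos(theta), written in
# Cartesian coordinates using r*cos(theta) = x.
phi0 = U * x * (1 + a**2 / r2)

# O(1) equation: phi0 must satisfy Laplace's equation.
print(sp.simplify(sp.diff(phi0, x, 2) + sp.diff(phi0, y, 2)))  # -> 0

# Forcing of the O(M^2) problem,
#   nabla^2 phi1 = (1/2) grad(phi0) . grad(|grad(phi0)|^2).
u, v = sp.diff(phi0, x), sp.diff(phi0, y)
q2 = u**2 + v**2
rhs = sp.simplify((u * sp.diff(q2, x) + v * sp.diff(q2, y)) / 2)
print(rhs)
```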
Imai–Lamla method
A simple method for finding the particular integral for $\varphi_1$ in two dimensions was devised by Isao Imai and Ernst Lamla. In two dimensions, the problem can be handled using complex analysis by introducing the complex potential $F = \varphi + i\psi$, formally regarded as a function of $z = x + iy$ and of its conjugate $\bar z$; here $\psi$ is the stream function, defined such that

$\rho u = \rho_0 \frac{\partial \psi}{\partial y}, \qquad \rho v = -\rho_0 \frac{\partial \psi}{\partial x},$

where $\rho_0$ is some reference value for the density. The perturbation series of $F$ is given by

$F = F_0 + M^2 F_1 + M^4 F_2 + \cdots,$
where $F_0 = F_0(z)$ is an analytic function, since $\varphi_0$ and $\psi_0$, being solutions of the Laplace equation, are harmonic functions. The particular integral for the first-order problem leads to the Imai–Lamla formula,
in which the homogeneous solution (an analytic function) can be used to satisfy the necessary boundary conditions. The series for the complex velocity $w = u - iv$ is given by

$w = w_0 + M^2 w_1 + M^4 w_2 + \cdots,$

where $w_0 = \mathrm{d}F_0/\mathrm{d}z$ is the leading-order complex velocity.
References
Fluid dynamics | Janzen–Rayleigh expansion | Chemistry,Engineering | 383 |
48,498,000 | https://en.wikipedia.org/wiki/Chinese%20language%20card | A Chinese language card or Chinese character card is a computer expansion card that improves the ability of computers to process Chinese text.
Early computers were limited in processing speed and storage capacity. If software such as CC-DOS or UCDOS was used to render Chinese characters, the Chinese font could take up to a third of RAM, making it impossible to execute large programs. Moreover, Chinese rendering via software had to go through the BIOS, so the speed could be as slow as dozens of characters per second.
Using a Chinese character card could improve the computer's ability to process Chinese text. The card has a Chinese font burnt into its ROM chip, so that the font no longer takes up computer RAM. It takes internal codes and directly renders the corresponding characters onto the screen, which works much faster than software-based rendering.
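The lookup such a card performs can be mimicked in software. The sketch below follows the classic HZK16-style layout, in which 16×16 glyph bitmaps are stored consecutively and indexed by the two-byte GB2312 internal code; the file name and the exact layout are illustrative assumptions, not a description of any particular card's ROM:

```python
GLYPH_BYTES = 16 * 16 // 8  # 32 bytes per 16x16 monochrome glyph


def glyph_offset(b1: int, b2: int) -> int:
    """Map a two-byte GB2312 internal code to a byte offset in the font.

    Both bytes of an internal code lie in 0xA1..0xFE; each "row" of the
    code table holds 94 characters, so the glyph index is
    (b1 - 0xA1) * 94 + (b2 - 0xA1).
    """
    return ((b1 - 0xA1) * 94 + (b2 - 0xA1)) * GLYPH_BYTES


def render(bitmap: bytes) -> str:
    """Turn one 32-byte bitmap into 16 rows of ASCII art."""
    rows = []
    for r in range(16):
        bits = (bitmap[2 * r] << 8) | bitmap[2 * r + 1]
        rows.append("".join("#" if bits & (1 << (15 - i)) else "."
                            for i in range(16)))
    return "\n".join(rows)


# Hypothetical usage with a font image dumped from a card's ROM:
# with open("HZK16", "rb") as f:
#     code = "中".encode("gb2312")          # b'\xd6\xd0'
#     f.seek(glyph_offset(code[0], code[1]))
#     print(render(f.read(GLYPH_BYTES)))
```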
Manufacturers
At the beginning of the 1990s, Lenovo (Legend Group), Founder Group, Giant Corporation, and E-TEN were manufacturing Chinese Character cards.
Decline
As computer hardware improved, Chinese character cards were gradually rendered obsolete by software, and today very few computers use them to handle Chinese text. After Microsoft began supporting the Chinese language in MS-DOS and Microsoft Windows, Chinese character cards became essentially obsolete.
See also
Han card
References
Computer peripherals
Chinese-language computing
Legacy hardware | Chinese language card | Technology | 268 |
29,623,466 | https://en.wikipedia.org/wiki/Microwave%20digestion | Microwave digestion is a chemical technique used to decompose sample material into a solution suitable for quantitative elemental analysis. It is commonly used to prepare samples for analysis using inductively coupled plasma mass spectrometry (ICP-MS), atomic absorption spectroscopy, and atomic emission spectroscopy (including ICP-AES).
To perform the digestion, sample material is combined with a concentrated strong acid or a mixture thereof, most commonly using nitric acid, hydrochloric acid and/or hydrofluoric acid, in a closed PTFE vessel. The vessel and its contents are then exposed to microwave irradiation, raising the pressure and temperature of the solution mixture. The elevated pressures and temperatures within a low pH sample medium increase both the speed of thermal decomposition of the sample and the solubility of elements in solution. Organic compounds are decomposed into gaseous form, effectively removing them from solution. Once these elements are in solution, it is possible to quantify elemental concentrations within samples.
Microwaves can be programmed to reach specific temperatures or ramp up to a given temperature at a specified rate. The temperature in the interior of the vessel is monitored by an external infrared sensor or by a fiber-optic probe, and the microwave power is regulated to maintain the temperature defined by the active program. The vessel solution must contain at least one solvent that absorbs microwave radiation, usually water. The specific blend of acids (or other reagents) and the temperatures vary depending upon the type of sample being digested. Often a standardized protocol for digestion is followed, such as an Environmental Protection Agency method.
Comparison between microwave digestion and other sample preparation methods
Before microwave digestion technology was developed, samples were digested using less convenient methods, such as heating vessels in an oven, typically for at least 24 hours. The use of microwave energy allows for fast sample heating, reducing digestion time to as little as one hour.
Another common means to decompose samples for elemental analysis is dry-ashing, in which samples are incinerated in a muffle furnace. The resultant ash is then dissolved for analysis, usually into dilute nitric acid. While this method is simple, inexpensive and does not require concentrated acids, it cannot be used for volatile elements such as mercury and can increase the likelihood of background contamination. The incineration will not convert all elements to soluble salts, necessitating an additional digestion step.
Quality control in microwave digestion
In microwave digestion, 100% analyte recovery cannot be assumed. To account for this, scientists perform tests such as fortification recovery, in which a spike (a known amount of the target analyte) is added to test samples. These spiked samples are then analyzed to determine whether the expected increase in analyte concentration occurs.
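A minimal sketch of the recovery calculation used for such checks follows; the acceptance window shown is a common rule of thumb, not a value taken from the sources discussed here:

```python
def spike_recovery(c_spiked: float, c_unspiked: float, c_added: float) -> float:
    """Percent recovery of a fortification (spike).

    c_spiked:   measured concentration in the spiked sample
    c_unspiked: measured concentration in the unspiked sample
    c_added:    known concentration of analyte added as the spike
    """
    return 100.0 * (c_spiked - c_unspiked) / c_added


# Example: 1.00 mg/L of analyte spiked into a digest that alone reads
# 0.42 mg/L; a spiked reading of 1.35 mg/L implies 93% recovery.
recovery = spike_recovery(1.35, 0.42, 1.00)
print(f"{recovery:.0f}%")        # 93%
print(75 <= recovery <= 125)     # within a typical 75-125% window
```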
Contamination from improperly cleaned digestion vessels is also a possibility. As such, in any microwave digestion, blank samples need to be digested to determine if there is background contamination.
References
Footnotes
Bibliography
Analytical chemistry | Microwave digestion | Chemistry | 608 |
57,094,192 | https://en.wikipedia.org/wiki/Kastus | Kastus Technologies is an Irish multinational nanotechnology company that specialises in patented, visible-light-activated, photocatalytic, antimicrobial coatings. These coatings prevent the growth of bacteria on surfaces such as ceramics, glass, and touchscreens, with no negative side effects for the end user. Kastus was founded in Dublin in 2014; its antimicrobial coatings were in development for over 10 years as part of a collaboration with the Dublin Institute of Technology and the Advanced Materials and Bio Engineering Research (AMBER) Centres.
History
John Browne, Kastus CEO, founded the company in 2014 in Dublin following 10 years of collaborative research with Dublin Institute of Technology. It was developed out of an increasing demand for a reduction in the spread of antibiotic-resistant infections commonly found on indoor surfaces. In October 2017, the Department of Health published “Ireland’s National Action Plan on Antimicrobial Resistance 2017-2020”, which highlighted the threat antimicrobial resistance poses and the urgent need for new technology to combat this.
In April 2016, the Sligo Institute of Technology, which is funded by Kastus, announced the creation of a non-toxic antimicrobial nanotechnology, which Kastus plans to market globally. This research is supported by a €1.5 million funding investment from Atlantic Bridge.
In 2018, Kastus partnered with Oman-based ceramic tile producer Al Maha Ceramics, which exports to 15 countries in Asia and Africa. The deal saw Kastus use its antimicrobial technology to produce a range of new tiles called iProtect.
In 2019, Kastus partnered with Faytech to enhance their development of touch display manufacturing capabilities.
In 2020, Kastus developed antimicrobial and antiviral technology used on touch screens to prevent the spread of diseases such as COVID-19. The screen technology has been shown to kill up to 99% of harmful bacteria, fungi, and antibiotic-resistant superbugs, as well as human coronavirus. Kastus was awarded EU funding to further develop and expand these technologies and their applications, and has partnered with companies such as Lenovo, Zagg, and Lavazza for a range of commercial applications for their products.
In 2021, Kastus raised €5.65 million in a Series A round to build out its global commercial team to meet growing demand for its antiviral surface protection technology.
Awards
Spin-out Company Impact Award (2017)
Irish Times Innovation of the Year award (2017)
Irish Times Life Sciences and Healthcare award (2017)
KTI Impact Award Winners 2017
Med Tech Award Finalists 2020
EY Entrepreneur of The Year Finalists
References
Nanotechnology companies
Companies based in Dublin (city) | Kastus | Materials_science | 557 |
49,615,253 | https://en.wikipedia.org/wiki/5D%20optical%20data%20storage | 5D optical data storage (also branded as Superman memory crystal, a reference to the Kryptonian memory crystals from the Superman franchise) is an experimental nanostructured glass for permanently recording digital data using a femtosecond laser writing process. Discs using this technology could be capable of storing up to 360 terabytes of data (at the largest size, 12 cm discs) for billions of years. The concept was experimentally demonstrated in 2013. Hitachi and Microsoft have researched glass-based optical storage techniques, the latter under the name Project Silica.
The "5-dimensional" descriptor is because, unlike marking only on the surface of a 2D piece of paper or magnetic tape, this method of encoding uses two optical dimensions and three spatial co-ordinates to write throughout the material, which suggested the name '5D data crystal'. No exotic higher dimensional properties are involved. The size, orientation and three-dimensional position of the nanostructures comprise the so-called five dimensions.
Technical design
The concept is to store data optically in non-photosensitive transparent materials such as fused quartz, which has high chemical stability. Recording data using a femtosecond-laser was first proposed and demonstrated in 1996. The storage medium consists of fused quartz, where the spatial dimensions, intensity, polarization, and wavelength are used to modulate data. By introducing gold or silver nanoparticles embedded in the material, their plasmonic properties can be exploited.
Recorded data can be read with a combination of an optical microscope and a polarizer.
The technique was first demonstrated in 2009 by researchers at the Swinburne University of Technology and in 2010 by Kazuyuki Hirao's laboratory at the Kyoto University, and developed further by Peter Kazansky's research group at the Optoelectronics Research Centre, University of Southampton. Discs recorded from that time have been tested for 3100 hours at 100°C and shown to still work "perfectly" ten years later.
Uses
In 2018, Professor Peter Kazansky used the technology to store a copy of Isaac Asimov's Foundation trilogy, which was launched into space aboard Elon Musk's Tesla Roadster in association with the Arch Mission Foundation.
In 2024, Kazansky's group encoded the three billion character human genome and etched it onto a coin-sized 5D disc. It includes a visual key explaining how to use it, in homage to the Pioneer plaques that were placed on board the 1972 Pioneer 10 and 1973 Pioneer 11 spacecraft. It is stored in the Memory of Mankind archive, located in the world's oldest salt mine in Hallstatt, Austria.
See also
References
External links
Marketing website of the Southampton research team
Big data
Solid-state computer storage media
Non-volatile memory
Digital preservation | 5D optical data storage | Technology | 578 |
23,612,565 | https://en.wikipedia.org/wiki/Journal%20of%20Chemical%20Technology%20%26%20Biotechnology | The Journal of Chemical Technology & Biotechnology is a monthly peer-reviewed scientific journal. It was established in 1882 as the Journal of the Society of Chemical Industry by the Society of Chemical Industry (SCI). In 1950 it changed its title to Journal of Applied Chemistry and volume numbering restarted at 1. In 1971 the journal changed its title to Journal of Applied Chemistry and Biotechnology and in 1983 it obtained the current title. It covers chemical and biological technology relevant for economically and environmentally sustainable industrial processes. The journal is published by John Wiley & Sons on behalf of SCI.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.174.
References
External links
Biotechnology journals
Biotechnology in the United Kingdom
Chemistry journals
English-language journals
Wiley (publisher) academic journals
Monthly journals
Publications established in 1882 | Journal of Chemical Technology & Biotechnology | Biology | 173 |
1,473,823 | https://en.wikipedia.org/wiki/Medical%20practice%20management%20software | Medical practice management software (PMS) is a category of healthcare software that deals with the day-to-day operations of a medical practice including veterinarians. Such software frequently allows users to capture patient demographics, schedule appointments, maintain lists of insurance payors, perform billing tasks, and generate reports.
In the United States, most PMS systems are designed for small to medium-sized medical offices. Some of the software is designed for or used by third-party medical billing companies. PMS is often divided among desktop-only software, client-server software, or Internet-based software.
The desktop-only variety is intended to be used only on one computer by one or a handful of users sharing access. Client-server software typically necessitates that the practice acquire or lease server equipment and operate the server software on that hardware, while individual users' workstations contain client software that accesses the server. Client-server software's advantage is in allowing multiple users to share the data and the workload; a major disadvantage is the cost of running the server. Internet-based software is a relatively newer breed of PMS. Such software decreases the need for the practice to run their own server and worry about security and reliability. However, such software removes patient data from the practice's premises, which can be seen as a security risk of its own.
PMS is often connected to electronic medical records (EMR) systems. While some information in a PMS and an EMR overlaps — for example, patient and provider data — in general the EMR system is used for assisting the practice with clinical matters, while PMS is used for administrative and financial matters. Medical practices often hire different vendors to provide the EMR and PMS systems. The integration of the EMR and PMS software is considered one of the most challenging aspects of the medical practice management software implementation.
Components of practice management software
Most practice management software contains systems that allow users to enter and track patients, schedule and track patient appointments, send out insurance claims and patient statements as part of the collection process, process insurance, patient and third party payments, and generate reports for the administrative and clinical staff of the practice. Typically, using a PMS also involves keeping up to date large sets of data including lists of diagnosis and procedures, lists of insurance companies, referring physicians, providers, facilities, and much more.
Appointment scheduling
Practice management systems often include a calendaring or scheduling component that allows staff to create and track upcoming patient visits. Software is often differentiated by whether it allows double-booking, or whether it uses a scheduling or a booking model. Schedules are often color-coded to allow healthcare providers (e.g., doctors, nurses, assistants) to easily identify blocks of time or sets of patients.
Claims and statements
If the patient carried a valid private or public insurance policy at the time these services were provided, the charges are then sent out as an insurance claim. The process of sending charges may happen on paper, usually with the use of the CMS-1500 form. This form lists the provider who performed the service, the patient, the services performed and the related diagnoses. For institutional (typically hospital) charges, claims may also be sent out on the UB-04 form (formerly the UB-92, the use of which was discontinued in 2007). Claims may also be sent out electronically using industry-standard electronic data interchange standards.
In most cases, electronic claims are submitted using an automated software process. Some practice management system vendors update CPT/ICD-10 codes in the practice software on an annual basis; others, especially smaller firms, leave it entirely up to medical practices. While many insurance payers have created methods for direct submission of electronic claims, many software vendors and practice users rely on the services of an electronic claim clearinghouse to submit their claims. Such clearinghouses commonly maintain connections to a large number of payers and make it easy for practices to submit claims to any of them: instead of creating a connection to every payer, the practice user or software vendor need only connect to the clearinghouse.
Once a claim is adjudicated by the payer, some sort of response is sent to the submitter. This usually comes as a paper Explanation of Benefits (EOB) or an Electronic Remittance Advice (ERA). These describe the actions that the payer took on each claim: amounts paid, denied, adjusted, etc.
In cases where a patient did not have proper insurance, or where insurance coverage did not fully pay the charges, the practice will usually send out patient statements. Practice management software often contains ways for a practice to print and mail their own statements (or other correspondence), and may even contain a way to interface to third-party patient statement printing companies.
Reporting
Almost invariably, the process of running a medical practice requires some introspection, and practice management software usually contains reporting capabilities to allow users to extract detailed data on financial performance and patient financial histories. PMS often has both pre-setup reports and allows users to design their own, ad-hoc reports.
In some cases, the reporting functionality of PMS interfaces with decision support systems or has similar functionality built-in.
Practice management software and commerce
The global veterinary PMS industry was estimated to be worth 323 million in 2016, with more than 120 million from the United States. Veterinary PMS is expected to grow at a rate of 8.9% per year. There are more than 20 different software packages available on the market for veterinary PMS.
Practice management software (PMS) has traditionally been commercial; few viable free practice management systems exist, though a few open source systems are under development. PMS usually costs from about $100 to tens of thousands of dollars to license and operate.
PMS often needs to interface with the outside world. There are a number of standards that are used; a minimal sketch of how the transactions below pair up follows the list:
HL7 — used to communicate with hospitals, or EHR systems
ANSI X12 EDI transactions, including:
270 — eligibility & benefit inquiry - Is the patient an insured of this payer?
271 — eligibility & benefit response (response to 270) - A yes or no response that the patient is insured.
276 — claims status inquiry (follows 837 submissions)
277 — claim status response (response to 276)
835 — claim payment/advice (follows 837) - indicates whether the 837 medical claim was paid, the amount of the payment, and the patient's financial responsibility
837D — claim submission for dental claims
837I — claim submission for institutional claims
837P — claim submission for professional claims
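As a rough illustration of how these transactions pair up (the sketch promised above; the helper and its argument name are hypothetical, and real X12 handling requires a proper EDI parser), the snippet below routes a transaction set ID to its purpose and expected response:

```python
# Request/response pairing of the X12 transaction sets listed above.
# Purely illustrative: the descriptions mirror the list; a real system
# would use a proper EDI library rather than this toy routing.
TRANSACTIONS = {
    "270": ("eligibility & benefit inquiry", "271"),
    "276": ("claim status inquiry", "277"),
    "837": ("claim submission (837D/837I/837P)", "835 and/or 277"),
}
RESPONSES = {
    "271": "eligibility & benefit response (answers a 270)",
    "277": "claim status response (answers a 276)",
    "835": "claim payment/advice (follows an 837)",
}

def describe(st01: str) -> str:
    """st01 is the transaction set ID taken from an X12 ST segment."""
    base = st01[:3]  # 837D/837I/837P all share the 837 handling here
    if base in TRANSACTIONS:
        purpose, reply = TRANSACTIONS[base]
        return f"{st01}: outbound {purpose}; expect {reply} back"
    if st01 in RESPONSES:
        return f"{st01}: inbound {RESPONSES[st01]}"
    return f"{st01}: unknown transaction set"

print(describe("270"))
print(describe("837P"))
print(describe("835"))
```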
See also
List of open source healthcare software
Electronic health record
Health informatics
Medical record
Evaluation and Management Coding
References
External links
Certification Commission for Health Information Technology - a nonprofit organization that evaluates and certifies healthcare technology, including medical software
Health informatics
Health care software | Medical practice management software | Biology | 1,392 |
63,944,266 | https://en.wikipedia.org/wiki/Differentiable%20vector%E2%80%93valued%20functions%20from%20Euclidean%20space | In the mathematical discipline of functional analysis, a differentiable vector-valued function from Euclidean space is a differentiable function valued in a topological vector space (TVS) whose domains is a subset of some finite-dimensional Euclidean space.
It is possible to generalize the notion of derivative to functions whose domain and codomain are subsets of arbitrary topological vector spaces (TVSs) in multiple ways.
But when the domain of a TVS-valued function is a subset of a finite-dimensional Euclidean space, many of these notions become logically equivalent, resulting in a much more limited number of generalizations of the derivative; additionally, differentiability is better behaved compared to the general case.
This article presents the theory of $k$-times continuously differentiable functions on an open subset $\Omega$ of Euclidean space $\mathbb{R}^n$, which is an important special case of differentiation between arbitrary TVSs.
This importance stems partially from the fact that every finite-dimensional vector subspace of a Hausdorff topological vector space is TVS isomorphic to Euclidean space so that, for example, this special case can be applied to any function whose domain is an arbitrary Hausdorff TVS by restricting it to finite-dimensional vector subspaces.
All vector spaces will be assumed to be over the field $\mathbb{F}$, where $\mathbb{F}$ is either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$.
Continuously differentiable vector-valued functions
A map $f$, which may also be denoted by $f(\,\cdot\,)$, between two topological spaces is said to be $C^0$ or $0$-times continuously differentiable if it is continuous. A topological embedding may also be called a $C^0$-embedding.
Curves
Differentiable curves are an important special case of differentiable vector-valued (i.e. TVS-valued) functions which, in particular, are used in the definition of the Gateaux derivative. They are fundamental to the analysis of maps between two arbitrary topological vector spaces and so also to the analysis of TVS-valued maps from Euclidean spaces, which is the focus of this article.
A continuous map $f : I \to X$ from a subset $I \subseteq \mathbb{R}$ that is valued in a topological vector space $X$ is said to be differentiable if for all $t \in I$ it is differentiable at $t$, which by definition means the following limit in $X$ exists:
$$f'(t) := \lim_{\substack{r \to t \\ r \in I}} \frac{f(r) - f(t)}{r - t},$$
where in order for this limit to even be well-defined, $t$ must be an accumulation point of $I$.
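As a quick concrete instance (our example, not the article's), take $X = \mathbb{R}^2$ and the circle curve
$$f : \mathbb{R} \to \mathbb{R}^2, \qquad f(t) = (\cos t, \sin t), \qquad f'(t) = \lim_{r \to t} \frac{f(r) - f(t)}{r - t} = (-\sin t, \cos t),$$
so $f$ is a $C^\infty$ curve; since $f'(t) \neq (0, 0)$ for all $t$, its restriction to any interval of length less than $2\pi$ is a $C^\infty$-embedding in the sense defined below.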
If $f : I \to X$ is differentiable then it is said to be $C^1$ or continuously differentiable if its derivative, which is the induced map $f' : I \to X$, is continuous.
Using induction on $1 < k \in \mathbb{N}$, the map $f$ is $C^k$ or $k$-times continuously differentiable if its derivative $f' : I \to X$ is $(k-1)$-times continuously differentiable, in which case the $k$th derivative of $f$ is the map $f^{(k)} := \left(f^{(k-1)}\right)'$.
It is called smooth, $C^\infty$, or infinitely differentiable if it is $k$-times continuously differentiable for every integer $k$.
For $k \in \mathbb{N}$, it is called $k$-times differentiable if it is $(k-1)$-times continuously differentiable and $f^{(k-1)}$ is differentiable.
A continuous function $f : I \to X$ from a non-empty and non-degenerate interval $I \subseteq \mathbb{R}$ into a topological space $X$ is called a curve or a $C^0$ curve in $X$.
A path in $X$ is a curve in $X$ whose domain is compact, while an arc or $C^0$-arc in $X$ is a path in $X$ that is also a topological embedding.
For any $k \in \{1, 2, \ldots, \infty\}$, a curve $f$ valued in a topological vector space $X$ is called a $C^k$-embedding if it is a topological embedding and a $C^k$ curve such that $f'(t) \neq 0$ for every $t \in I$, where it is called a $C^k$-arc if it is also a path (or equivalently, also a $C^0$-arc) in addition to being a $C^k$-embedding.
Differentiability on Euclidean space
The definitions given above for curves are now extended from TVS-valued functions defined on subsets of $\mathbb{R}$ to TVS-valued functions defined on open subsets of finite-dimensional Euclidean spaces.
Throughout, let $\Omega$ be an open subset of $\mathbb{R}^n$, where $n \geq 1$ is an integer.
Suppose $t_0 \in \Omega$ and $f : \operatorname{domain} f \to Y$ is a function such that $t_0 \in \operatorname{domain} f$, with $t_0$ an accumulation point of $\operatorname{domain} f$. Then $f$ is differentiable at $t_0$ if there exist $n$ vectors $e_1, \ldots, e_n$ in $Y$, called the partial derivatives of $f$ at $t_0$, such that
$$\lim_{\substack{t \to t_0 \\ t \in \operatorname{domain} f}} \frac{f(t) - f(t_0) - \sum_{i=1}^{n} (t_i - t_{0,i})\, e_i}{\left\| t - t_0 \right\|_2} = 0 \text{ in } Y,$$
where $t = (t_1, \ldots, t_n)$ and $t_0 = (t_{0,1}, \ldots, t_{0,n})$.
If $f$ is differentiable at a point then it is continuous at that point.
If $f$ is differentiable at every point in some subset $S$ of its domain then $f$ is said to be differentiable in $S$; if the subset is not mentioned then this means that it is differentiable at every point in its domain.
If $f$ is differentiable and if each of its partial derivatives is a continuous function then $f$ is said to be $C^1$ or continuously differentiable.
For $k > 1$, having defined what it means for a function $f$ to be $C^k$ (or $k$-times continuously differentiable), say that $f$ is $C^{k+1}$ or that $f$ is $(k+1)$-times continuously differentiable if $f$ is continuously differentiable and each of its partial derivatives is $C^k$.
Say that $f$ is $C^\infty$, smooth, or infinitely differentiable if $f$ is $C^k$ for all $k = 0, 1, \ldots$
The support of a function $f$ is the closure (taken in its domain $\operatorname{domain} f$) of the set $\{ x \in \operatorname{domain} f : f(x) \neq 0 \}$.
Spaces of Ck vector-valued functions
In this section, the space of smooth test functions and its canonical LF-topology are generalized to functions valued in general complete Hausdorff locally convex topological vector spaces (TVSs). After this task is completed, it is revealed that the topological vector space that was constructed could (up to TVS-isomorphism) have instead been defined simply as the completed injective tensor product $C_c^\infty(\Omega)\, \widehat{\otimes}_\epsilon\, Y$ of the usual space of smooth test functions with $Y$.
Throughout, let $Y$ be a Hausdorff topological vector space (TVS), let $k \in \{0, 1, \ldots, \infty\}$, and let $\Omega$ be either:
an open subset of $\mathbb{R}^n$, where $n \geq 1$ is an integer, or else
a locally compact topological space, in which case $k$ can only be $0$.
Space of Ck functions
For any $k = 0, 1, \ldots, \infty$, let $C^k(\Omega; Y)$ denote the vector space of all $C^k$ $Y$-valued maps defined on $\Omega$ and let $C_c^k(\Omega; Y)$ denote the vector subspace of $C^k(\Omega; Y)$ consisting of all maps in $C^k(\Omega; Y)$ that have compact support.
Let $C^k(\Omega)$ denote $C^k(\Omega; \mathbb{F})$ and $C_c^k(\Omega)$ denote $C_c^k(\Omega; \mathbb{F})$.
Give $C^k(\Omega; Y)$ the topology of uniform convergence of the functions together with their derivatives of order $< k + 1$ on the compact subsets of $\Omega$.
Suppose $\Omega_1 \subseteq \Omega_2 \subseteq \cdots$ is a sequence of relatively compact open subsets of $\Omega$ whose union is $\Omega$ and that satisfy $\overline{\Omega_i} \subseteq \Omega_{i+1}$ for all $i$.
Suppose that $(U_\alpha)_{\alpha \in A}$ is a basis of neighborhoods of the origin in $Y$. Then for any integer $\ell < k + 1$, the sets:
$$\mathcal{U}_{i, \ell, \alpha} := \left\{ f \in C^k(\Omega; Y) \;:\; \partial^q f(p) \in U_\alpha \text{ for all } p \in \Omega_i \text{ and all } q \in \mathbb{N}^n \text{ with } |q| \leq \ell \right\}$$
form a basis of neighborhoods of the origin for $C^k(\Omega; Y)$ as $i$, $\ell$, and $\alpha \in A$ vary in all possible ways.
If $\Omega$ is a countable union of compact subsets and $Y$ is a Fréchet space, then so is $C^k(\Omega; Y)$.
Note that $\mathcal{U}_{i, \ell, \alpha}$ is convex whenever $U_\alpha$ is convex.
If $Y$ is metrizable (resp. complete, locally convex, Hausdorff) then so is $C^k(\Omega; Y)$.
If $(p_\alpha)_{\alpha \in A}$ is a basis of continuous seminorms for $Y$ then a basis of continuous seminorms on $C^k(\Omega; Y)$ is:
$$\mu_{i, \ell, \alpha}(f) := \sup_{p \in \Omega_i} \sum_{|q| \leq \ell} p_\alpha\left( \partial^q f(p) \right)$$
as $i$, $\ell$, and $\alpha \in A$ vary in all possible ways.
Space of Ck functions with support in a compact subset
The definition of the topology of the space of test functions is now duplicated and generalized.
For any compact subset $K \subseteq \Omega$, let $C^k(K; Y)$ denote the set of all $f$ in $C^k(\Omega; Y)$ whose support lies in $K$ (in particular, if $f \in C^k(K; Y)$ then the domain of $f$ is $\Omega$ rather than $K$) and give it the subspace topology induced by $C^k(\Omega; Y)$.
If $K$ is a compact space and $Y$ is a Banach space, then $C^0(K; Y)$ becomes a Banach space normed by $\|f\| := \sup_{\omega \in \Omega} \|f(\omega)\|$.
Let $C^k(K)$ denote $C^k(K; \mathbb{F})$.
For any two compact subsets $K \subseteq L \subseteq \Omega$, the inclusion
$$\operatorname{In}_K^L : C^k(K; Y) \to C^k(L; Y)$$
is an embedding of TVSs, and the union of all $C^k(K; Y)$, as $K$ varies over the compact subsets of $\Omega$, is $C_c^k(\Omega; Y)$.
Space of compactly supported Ck functions
For any compact subset $K \subseteq \Omega$, let
$$\operatorname{In}_K : C^k(K; Y) \to C_c^k(\Omega; Y)$$
denote the inclusion map and endow $C_c^k(\Omega; Y)$ with the strongest topology making all $\operatorname{In}_K$ continuous, which is known as the final topology induced by these maps.
The spaces $C^k(K; Y)$ and maps $\operatorname{In}_{K_1}^{K_2}$ form a direct system (directed by the compact subsets of $\Omega$) whose limit in the category of TVSs is $C_c^k(\Omega; Y)$ together with the injections $\operatorname{In}_K$.
The spaces $C^k\left(\overline{\Omega_i}; Y\right)$ and maps $\operatorname{In}_{\overline{\Omega_i}}^{\overline{\Omega_j}}$ also form a direct system (directed by the total order $\mathbb{N}$) whose limit in the category of TVSs is $C_c^k(\Omega; Y)$ together with the injections $\operatorname{In}_{\overline{\Omega_i}}$.
Each embedding $\operatorname{In}_K$ is an embedding of TVSs.
A subset $S$ of $C_c^k(\Omega; Y)$ is a neighborhood of the origin in $C_c^k(\Omega; Y)$ if and only if $S \cap C^k(K; Y)$ is a neighborhood of the origin in $C^k(K; Y)$ for every compact $K \subseteq \Omega$.
This direct limit topology (i.e. the final topology) on $C_c^k(\Omega; Y)$ is known as the canonical LF topology.
If $Y$ is a Hausdorff locally convex space, $T$ is a TVS, and $u : C_c^k(\Omega; Y) \to T$ is a linear map, then $u$ is continuous if and only if for all compact $K \subseteq \Omega$ the restriction of $u$ to $C^k(K; Y)$ is continuous. The statement remains true if "all compact $K \subseteq \Omega$" is replaced with "all $K := \overline{\Omega_i}$".
Properties
Identification as a tensor product
Suppose henceforth that $Y$ is Hausdorff.
Given a function $f \in C^k(\Omega)$ and a vector $y \in Y$, let $f \otimes y$ denote the map $f \otimes y : \Omega \to Y$ defined by $(f \otimes y)(p) := f(p)\, y$.
This defines a bilinear map $\otimes : C^k(\Omega) \times Y \to C^k(\Omega; Y)$ into the space of functions whose image is contained in a finite-dimensional vector subspace of $Y$;
this bilinear map turns this subspace into a tensor product of $C^k(\Omega)$ and $Y$, which we will denote by $C^k(\Omega) \otimes Y$.
Furthermore, if $C_c^k(\Omega) \otimes Y$ denotes the vector subspace of $C^k(\Omega) \otimes Y$ consisting of all functions with compact support, then $C_c^k(\Omega) \otimes Y$ is a tensor product of $C_c^k(\Omega)$ and $Y$.
If $\Omega$ is locally compact then $C_c^0(\Omega) \otimes Y$ is dense in $C^0(\Omega; Y)$, while if $\Omega$ is an open subset of $\mathbb{R}^n$ then $C_c^\infty(\Omega) \otimes Y$ is dense in $C^k(\Omega; Y)$.
See also
Notes
Citations
References
Banach spaces
Differential calculus
Euclidean geometry
Functions and mappings
Generalizations of the derivative
Topological vector spaces | Differentiable vector–valued functions from Euclidean space | Mathematics | 1,649 |
34,317,128 | https://en.wikipedia.org/wiki/The%20Island%20on%20Bird%20Street%20%28film%29 | The Island on Bird Street () is a 1997 Danish produced drama film directed by Søren Kragh-Jacobsen. It is based on the novel The Island on Bird Street.
Cast
Patrick Bergin as Stefan
Jordan Kiziuk as Alex
Jack Warden as Boruch
James Bolam as Doctor Studjinsky
Stefan Sauk as Goehler
Simon Gregor as Henryk
Lee Ross as Freddy
Suzanna Hamilton as Stasya's Mother
Sian Nicola Liquorish as Stasya
Michael Byrne as Bolek
Heather Tobias as Mrs. Studjinsky
Leon Silver as Mr. Gryn
Sue Jones-Davies as Mrs. Gryn
Awards
It was entered into the 47th Berlin International Film Festival. Zbigniew Preisner won the Silver Bear for an outstanding single achievement and Jordan Kiziuk won an Honourable Mention.
References
External links
1997 drama films
1997 films
Danish drama films
English-language Danish films
Danish World War II films
Rescue of Jews during the Holocaust
Holocaust films
Films set in the 1940s
Films about Jews and Judaism
Films based on Israeli novels
Films directed by Søren Kragh-Jacobsen
Films scored by Zbigniew Preisner
Films set in Poland
1990s English-language films | The Island on Bird Street (film) | Biology | 245 |
46,190,365 | https://en.wikipedia.org/wiki/The%20Red%20Bed | The Red Bed is a piece of painted furniture designed by the English architect and designer William Burges made between 1865 and 1867. Built of mahogany, painted blood red and decorated with imagery of the Sleeping Beauty fairy tale, it was made for Burges's rooms at Buckingham Street, and later moved to his bedroom at The Tower House, the home he designed for himself in Holland Park. Burges wanted to fill his home with furniture decorated with paintings; they served not only their obvious practical purposes, “but spoke and told a story”. After catching a chill while engaged on works for the Marquess of Bute at Cardiff, Burges returned to the Tower House and died in the Red Bed, aged 53, on 20 April 1881.
The bed is now part of the collection of Burges furniture at The Higgins Art Gallery & Museum in Bedford.
Notes
References
1871 in art
Beds
William Burges furniture | The Red Bed | Biology | 187 |
71,107,522 | https://en.wikipedia.org/wiki/Julian%20Smith%20%28photographer%29 | Julian Augustus Romaine Smith F.R.P.S. (1873–1947) was a British-Australian surgeon and photographer.
Early life and education
Julian Smith was born on 5 December 1873 in Camberwell, Surrey, England, the son of Rose Amelia Smith (née Pooley) and Captain Julian Augustus James Smith, master mariner. His family migrated to live in Halifax Street Adelaide, Australia three years later.
He was educated at Prince Alfred College and the University of Adelaide where he obtained a Bachelor of Science in 1892 and on graduation taught at his former school, returning to University to study medicine from 1893. He rowed in the winning Adelaide university crew in 1895–1896. However a mass resignation of all honorary physicians and surgeons due to disagreement between the board of management of the Royal Adelaide Hospital and the government ceased clinical instruction, so that in 1897 Smith and seventeen other students had to move to Melbourne to complete their studies, and there he rowed in and coached the Ormond College rowing crew 1897–1898.
Smith graduated with M.B. in 1898 and B.S. in 1899 at the top of his year, with exhibitions, and prizes including that offered by the estate of Dr. James George Beaney for bacteriology in surgery. He was made senior resident medical officer at the Royal Melbourne Hospital, and was interim medical superintendent. He obtained his M.D. (Melbourne) in 1901 followed by the degree of Master of Surgery (Adelaide) in 1908, examined by Professor Welsh, of the University of Sydney, and Dr. Reissmann and Professor of operative surgery Archibald Watson of Adelaide University. His thesis was "The Treatment Surgical Tuberculosis" from his research on the treatment of tuberculosis by vaccines, in the opsonic method developed by Sir Almroth Wright, with whom Smith worked when in London.
Surgeon
In April 1901 Smith began general practice at Morwell, Gippsland where he was appointed Health Officer, with an early task of dealing with an outbreak of diphtheria. He and Edith Mary Reynolds were married by Archdeacon Langley at St Paul's Cathedral, Melbourne, on 24 September that year.
While the couple lived in Gippsland, their first son was born on 21 January 1903. In January 1906, to the regret of friends and patients, he left Morwell to practice as a junior partner in the Simpson Street, East Melbourne, surgery of Frederic Bird, though he returned to operate on patients there until 1912. Considerable attention from the press was given in 1912 to Smith's depositions supporting claimants suing the Railway Commissioners after an accident at Yea, during which Smith's and other medicos' fees were questioned. Smith was called upon in subsequent years to give medical evidence in court in cases of divorce, inheritance disputes, murder and assault, accidents and suicides.
He was appointed honorary demonstrator of surgery at the University of Melbourne in mid-1907, and also elected honorary surgeon at St Vincent's Hospital, Melbourne, and influenced its recognition as a clinical school of the university during 1909. He successfully established rooms at 59 Collins Street (later at 2 Collins Street) and a private hospital. One of his patients was Tasmanian Senator Rudolph Ready, and in 1918 Albury Anzac veteran and grazier George Robert Jackson bequeathed him £3000. The couple, then residing in Powlett St. South Yarra, purchased a holiday home, part of Glen Shian on Ballar Creek in Mt Eliza, in 1921. In 1927 he became a Foundation Fellow of the Royal Australasian College of Surgeons. Representing Victoria at the International Cancer Conference while on holiday in London in 1928, Smith predicted that a cure for cancer was imminent, and later, speaking in Australia on the use of radium in its treatment, he used Dr. Ronald G. Canti's recent film to discuss its effect on cancer cells, comparing the spread of the latter to 'Bolsheviks.' He retired from St Vincent's and was appointed consulting surgeon in 1929. His long-distance phone consultation with the Harley Street specialist Dr. Moreland McCrea in London concerning a life-and-death case was hailed as 'epoch-making' and attracted the attention of King George V.
In 1936 he retired from practice, but in World War II returned to surgery. From his interests in haematology, he made the prototypes of a pump for transfusing blood direct from donor to patient, and devised a machine for sharpening and polishing transfusion and other needles, both inventions advanced surgical treatment. As a member of the British Medical Association in 1901–36 he promulgated views on surgery, particularly on diseases of the urinary tract, at branch meetings and his research in urology and transfusion was published in the Medical Journal of Australia.
Photographer
Recognised as a distinguished surgeon in Melbourne, Smith succeeded in a parallel career as an eminent photographer when, having taken up the medium in the 1920s and exhibiting with the Melbourne Camera Club, he devoted time to it in his late forties. He specialised in portraiture, which he exhibited locally and internationally. He helped establish the Victorian Photographic Salon as a founding member in 1929 and was its president and frequently judged its exhibitions, including its International Salon. In 1946 the Australasian Photo-Review paid tribute to him: "It is safe to assume that every Australian photographer is familiar with the work of Dr. Julian Smith. His artistic genius, his technical skill and his versatility are famous, not only in Australia, but throughout the whole world of pictorial photography." He was elected an honorary fellow of the Royal Photographic Society. In his early history of the medium in Australia, Jack Cato asserted that Smith "had no superior in any part of the world". His portraits are in an outmoded Pictorialist style in a period of the emerging New Photography, artistically lit with orchestrated, sometimes melodramatic, poses, and printed with radical overexposure in pyrocatechin developer and bleaching-back with ferricyanide. In his more contrived, but popular, 'character study' tableaux the subject may be costumed as a protagonist from Dickens, Shakespeare, or from nursery rhymes.
Smith's character studies appeared, with an article explaining his technique, in Contemporary Photography.
Reception
Smith's work was widely admired in the 1920s and 1930s. Reviewing his contributions to an exhibition of the Melbourne Camera Club in July 1926, The Age newspaper wrote: "Dr. Julian Smith's work in the field of portraiture is quite distinguished by its refinement," and in a review of a May 1930 show in which his work featured, the newspaper noted that "the matter of tone (speaking from the painter's point of view) has received close attention," especially in "such fine studies as The Prince, East Is East, and the head study, August Knapps. An outdoor study of choice quality is The Little Dock."
Smith's work served as material for discussion during the 1930s of the artistic worth of photography. Painter Arthur Streeton, reviewing the 1931 International exhibition of the Victorian Salon of Photograph at the Athenaeum Gallery, after a preamble supporting the idea that photography is art, chooses for his first comments Smith's The Painter, La Rixe ('The Brawl') and Flight. Of the same show watercolourist Blamire Young remarks on Smith's determination "to extract from his models the very utmost they can offer in the way of character and presentment. His lighting effects are still further systematised, and his control of his medium appears to be on the verge of the absolute," hailing his portrait of John Shirlow "as good as anything Dr. Smith has done. It shows the fine feeling for type which guides him in the selection of his sitters, and which so frequently places his work in the front rank," though, at odds with Streeton he condemns the "crudity of ... design" in La Rixe which "reminds of the gulf which still separates photography from fine art."
By 1933 the Australasian Photo-Review was more specific about the effect of his portraits and 'character studies':
Dr Julian Smith is represented by four of his capable portrait studies; perhaps character studies would be a more apt description. He uses emphasis of lighting in a dramatic way, and thus heightens the drama already suggested by the disposition of the model.
He achieved international recognition; the American Annual of Photography featured his "My Aims and Methods" in 1941. Unafraid to express his forthright opinions, in 1935 after the 3rd Canadian salon, he wrote to Eric Brown, director of the National Gallery of Canada, to complain "about the selection methods, the acceptance of photogravure as a photographic process, the recognition or not of certain technical processes" and the definition of "experimental photography."
Portraitist
Smith was a mentor to portraitist and fashion photographer Athol Shmith, whose studio was also in the 'Paris End' of Collins Street, Melbourne.
Julian Smith's subjects include his fellow medicos: biochemist Marjorie Bick, virologist Frank Macfarlane Burnet, pathologist Howard Florey, Royal Physician Thomas Horder, anatomist Professor Frederic Wood Jones, Dr. John Dale and Dr. Thomas Wood; and other celebrated Australians: aviator Charles Kingsford Smith, Colonel Walter E. Summons and Brigadier Neil Hamilton Fairley; writer Robert Henderson Croll and poets John Shaw Neilson and Bernard O'Dowd; dancer Sono Osato; actors Gregan McMahon and Frank Talbot; artists John Shirlow, Murray Griffin, William Dargie and Lionel Lindsay; photographers Harold Cazneaux (who also photographed Smith), Dudley Johnston, E. B. Hawkes, Monte Luke, James E. Paton and F. C. Tilney; politician Alfred Stephen; Gwendolyn M. Bernard; businessman Sir Robert Gibson; Beatrice Baillieu; and community worker and writer Paquita Mawson.
Legacy
Smith died of cancer on 13 November 1947 at his East Melbourne home aged 74, and was cremated at Springvale with Anglican rites. His wife Edith, sons Dr Orme Smith, Dr Geoffrey Smith (dentist), Dr Hubert Smith, and daughter Roma (Mrs Page) survived him.
Smith was a pigeon breeder and valued it as a hobby and for its commercial possibilities, proclaiming that "the squab is highly nutritious and in all diseases which caused a loss of tissue there was nothing in the albuminous type of meat to be compared with the flesh of the pigeon." He was also known for dancing to relax between operations in the surgery; writer Joan Lindsay remembered that "trifling eccentricities ... gave Dr Julian his unique flavour. Behind the rather petulant façade he was a good, clever and kindly man, mourned by thousands of friends and patients when he died."
In 1943 Smith saw and was impressed by the drawings of a young man Russell Drysdale who was in hospital in Melbourne for an operation on his left eye, and he introduced him to Daryl Lindsay, through whom Drysdale met George Bell of the Contemporary Art Society which promoted modernist European styles, and he encouraged Drysdale to consider becoming a professional artist.
W. B. McInnes's portrait of Dr Julian Smith won the Archibald Prize in 1936. Posthumously, Kodak published a portfolio of Smith's portraits, Fifty Masterpieces of Photography.
Exhibitions
Group
1926, July: Melbourne Camera Club, Kodak Salon, 161 Swanston Street, Melbourne
1930, May: Everymans Library, Collins Street, Melbourne
1930, July: Victorian Salon of Photography exhibition, Fine Art Society, 100 Exhibition St., Melbourne
1931, 1–12 September: International exhibition of the Victorian Salon of Photograph, Athenaeum Gallery
1939, 7–19 August: international camera pictures. Opened by Harold B. Herbert Athenaeum Gallery, 188 Collins Street, Melbourne
Posthumous
1948, 5–23 April: The Dr. Julian Smith Memorial Collection, Kodak Salon Galleries, 386 George Street, Sydney
1958, September to November: The Memorial Exhibition of Character Portrait Studies by the late Dr Julian Smith, The Kodak Galleries, Sep – Nov 1958
Collections
National Portrait Gallery
National Library of Australia
State Library of Victoria
National Gallery of Victoria
Art Gallery of New South Wales
Adelaide University Research and Scholarship Collection
Gallery
References
External links
1873 births
1947 deaths
Australian photographers
Fellows of the Royal Photographic Society
Portrait photographers
Pictorialists
Australian surgeons
British emigrants to Australia
Australian urologists
Vaccinologists
Deaths from cancer in Australia
Australian portrait photographers
University of Adelaide alumni
People educated at Prince Alfred College | Julian Smith (photographer) | Biology | 2,565 |
54,686,113 | https://en.wikipedia.org/wiki/Downcast%20%28app%29 | Downcast is a podcast client application for iOS, macOS, and watchOS. It was originally developed by Seth McFarland of Jamawkinaw Enterprises LLC and is currently being developed and maintained by George Cox of Tundaware LLC.
References
External links
Mobile applications
IOS software
Podcasting software | Downcast (app) | Technology | 61 |
54,541,552 | https://en.wikipedia.org/wiki/Cristina%20S%C3%A1nchez%20%28molecular%20biologist%29 | Dr. Cristina Sánchez is a Spanish molecular biologist.
She was born in Madrid, Spain in 1971.
She started her scientific career as an undergraduate student at the laboratory of Dr. Ramos and Dr. Fernández-Ruiz at the Complutense University of Madrid in 1994.
She obtained her PhD with Honors in Biochemistry and Molecular Biology at Complutense University in 2000 and went on to postdoctoral research studying the antitumoral and other properties of medical cannabis, especially the therapeutic potential of cannabinoids in cancer.
She has been vocal in popularizing the apoptotic effect of cannabinoids on cancer cells that bear cannabinoid receptors, an effect that leaves healthy receptor-bearing cells unharmed.
References
1971 births
Spanish biologists
Living people
Women molecular biologists
Spanish women scientists
Molecular biologists
21st-century Spanish biologists
21st-century women scientists
Complutense University of Madrid alumni | Cristina Sánchez (molecular biologist) | Chemistry | 184 |
55,595,414 | https://en.wikipedia.org/wiki/NGC%201978 | NGC 1978 (also known as ESO 85-SC90) is an elliptical shaped globular cluster or open cluster in the constellation Dorado. It is located within the Large Magellanic Cloud. It was discovered by James Dunlop on November 6, 1826. At an aperture of 50 arcseconds, its apparent V-band magnitude is 10.20, but at this wavelength, it has 0.16 magnitudes of interstellar extinction. It appears 3.9 arcminutes wide. NGC 1978 has a radial velocity of 293.1 ± 0.9 km/s.
The northwest half of NGC 1978 is iron-rich and younger, whereas the southeast part of the cluster has very little iron. NGC 1978 is also highly elliptical (ε ~ 0.30 ± 0.02), suggesting tidal action between it and the Large Magellanic Cloud. It is rich in pulsating asymptotic giant branch stars, often oxygen-rich or carbon-rich. NGC 1978 is about 2 billion years old. Estimates of its mass and total luminosity give a mass-to-luminosity ratio of 0.40 $M_\odot/L_\odot$. All else equal, older star clusters have higher mass-to-luminosity ratios; that is, they have lower luminosities for the same mass.
References
External links
Globular clusters
ESO objects
1978
Dorado
Large Magellanic Cloud
18260906 | NGC 1978 | Astronomy | 295 |
57,381,332 | https://en.wikipedia.org/wiki/Saltwater%20intrusion%20in%20California | The State of California enforces several methodologies through technical innovation and scientific approach to combat saltwater intrusion in areas vulnerable to saltwater intrusion. Seawater intrusion is either caused by groundwater extraction or increased in sea level. For every , sea-salty waters rises as the cone of depression forms. Salinization of groundwater is one of the main water pollution ever produced by mankind or from natural processes. It degrades water quality to the point it passes acceptable drink water and irrigation standards.
Monitoring Seawater Intrusion
Understanding the extent and rate of saltwater intrusion is a key element of sustainable water management. Ineffective management means low water quality for urban sectors and agriculture. Effective management strategies include monitoring seawater intrusion in areas prone to it. Common approaches for monitoring seawater intrusion include measuring groundwater levels, hydrograph analysis, water quality sampling, and geophysical logging. These procedures provide discrete and tangible early-warning information regarding saltwater intrusion adjacent to land and groundwater aquifers. Airborne electromagnetic measurement, flown by helicopter, is used to map electrical resistivity; this method can survey water quality over large areas in a single day, penetrating to considerable depths below the surface, and yields useful data for hydrological interpretation.
Los Angeles County
The groundwater basins in Los Angeles County are considered a vital resource for both agriculture and residential areas. For more than 40 years, Los Angeles County has managed to protect local groundwater basins from seawater intrusion. By injecting freshwater along coastal regions, the county creates hydraulic gradients between freshwater and saltwater, which prevent saltwater from advancing further inland. One critical factor affecting water supply in Los Angeles is population growth: as the population grows, demand for freshwater from groundwater pumping wells increases, and saltwater intrusion tends to advance further inland into the county's aquifers, since a hydrologic condition is created in which saltwater follows the pressure gradient produced landward. A cone of depression develops as a result of the operation of pumping wells that supply water for residential areas and agriculture. To combat saltwater intrusion, Los Angeles water districts decided to construct injection wells to form a hydraulic barrier, preventing the advance of saltwater intrusion into Los Angeles aquifers. Geologists, however, continue to study and survey the Los Angeles County coastline because these hydraulic gradients are not fully efficient. To better understand saltwater intrusion in Los Angeles County, the U.S. Geological Survey partners with the Water Replenishment District of Southern California and the Los Angeles County Department of Public Works to conduct geological surveys using reflection seismology; such seismic profiles are essential to understanding how sedimentation influences saltwater intrusion.
Managing Seawater Intrusion
The Alamitos Barrier Project is one of the three hydraulic barriers in Los Angeles County. It was created mainly to protect groundwater supplies from seawater intrusion and is currently operated under the Los Angeles County Flood Control District and the Orange County Water District. Other joint participants include the Water Replenishment District of Southern California, which is responsible for supplying water to each hydraulic barrier, and the County of Los Angeles Department of Public Works, which operates the projects on a daily basis. The effects of seawater intrusion were first noticed in 1956. In response, a coastal barrier project, known as Water Factory 21, was later built by the Orange County Water District to combat saltwater intrusion, which remains prominent and troublesome to this day. The District built seven extraction wells, located 2 miles from the coast, to intercept saltwater and send it back into the sea. A series of 23 injection wells was also built further inland to create a powerful hydraulic barrier between saltwater and freshwater. The water supplies of Water Factory 21 undergo several treatment phases before reaching the injection wells, including air stripping, recarbonation, multi-media filtration, activated-carbon adsorption, and chlorination. About 23,000 acre-feet of water is produced each year and supplied to the injection wells to maintain the hydraulic barrier. After treatment, the injection wells distribute this freshwater both toward the ocean and into the groundwater basin, with the majority flowing into the groundwater basin to meet consumer demand.
Sacramento San-Joaquin Delta
Both the levee system and delta islands help protect freshwater hydrology and municipal water treatment facilities from saltwater intrusion. Under extreme drought conditions, the combined flow of fresh water from all of the San Joaquin River's tributaries is no longer sufficient to stem the brackish flows that come in from the bay on every tidal cycle. State officials have gone so far as to build levees across major saltwater in-flows in times of especially severe drought. Saltwater intrusion is temporarily stemmed in spring months, when snowmelt and rain runoff increase the water volumes carried by the San Joaquin and Sacramento rivers. The issue of saltwater intrusion in this delta is expected to get worse as climate cycles affected by climate change push California further into drought and stream flows further decrease in summer months after snowpack support has waned. Before human intervention, saltwater regularly flooded the marshes in the Delta, but the location of pumping stations providing water for agricultural and domestic use means that saltwater intrusion would be catastrophic for the state's water supply. The health of the naturally formed barrier islands is critical for continued saltwater exclusion, and is an active area of research.
Agricultural Drainage in the Delta
In the southernmost part of the Delta, the concentration of saltwater increases as farmers irrigate their crops for fresh produce. Agricultural drainage water is where salinization is intensified through the process of irrigation. On some occasions, no delta water is left to flush out and push back the saltwater, specifically in the south Delta. This creates localized salinity problems for water managers to address or mitigate, since the salinity is highly concentrated.
Suisun Marsh
The Suisun Marsh is one of the largest brackish-water wetlands in the Sacramento-San Joaquin Delta. This aquatic habitat is where freshwater and saltwater meet, and 230 miles of levees protect the marsh. Delta salinity greatly influences the overall health of the Suisun Marsh, including its ecosystem of plants and neighboring species. The State Water Project's Suisun Marsh Salinity Control Gates manage tidal flows to limit saltwater intrusion from salty tidal flows. The California Department of Water Resources built this tidal-flow control gate to limit the high-salinity water introduced from Grizzly Bay through Montezuma Slough.
References
Environmental issues with water
Hydrology
Water in California | Saltwater intrusion in California | Chemistry,Engineering,Environmental_science | 1,362 |
1,204,311 | https://en.wikipedia.org/wiki/Autopen | An autopen (or signing machine) is a device used for the automatic signing of a signature. Prominent individuals may be asked to provide their signatures many times a day, such as celebrities receiving requests for autographs, or politicians signing documents and correspondence in their official capacities. Consequently, many public figures employ autopens to allow their signature to be printed on demand and without their direct involvement.
Though manual precursors of the modern autopen have existed since at least 1803, 21st-century autopens are machines that are programmed with a signature, which is then reproduced by a motorized, mechanical arm holding a pen.
Given the exact verisimilitude to the real hand signature, the use of the autopen allows for a small degree of wishful thinking and plausible deniability as to whether a famous autograph is real or reproduced, thus increasing the perception of the personal value of the signature by the lay recipient. However, known or suspected autopen signatures are also vastly less valuable as philographic collectibles; legitimate hand-signed documents from individuals known to also use an autopen usually require verification and provenance to be considered valid.
Early autopens used a plastic matrix of the original signature which is a channel cut into an engraved plate in the shape of a wheel. A stylus driven by an electric motor followed the x- and y-axis of a profile or shape engraved in the plate (which is why it is called a matrix). The stylus is mechanically connected to an arm which can hold almost any common writing instrument, so the favourite pen and ink can be used to suggest authenticity. The autopen signature is made with even pressure (and indentation in the paper), which is how these machines are distinguishable from original handwriting where the pressure varies.
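The plate-following mechanism lends itself to a short sketch. The illustrative Python below is our own (`move_arm_to` stands in for a hypothetical motor-control interface, whereas real autopens are analog electromechanical devices); it replays a stored signature path at constant pen pressure, the very property noted above that distinguishes machine signatures from handwriting:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def move_arm_to(x: float, y: float, pressure: float) -> None:
    """Stand-in for a motor-control interface (hypothetical)."""
    print(f"pen at ({x:.1f}, {y:.1f}) pressure={pressure}")

def interpolate(path: List[Point], step: float = 0.5) -> List[Point]:
    """Resample the stored signature path into evenly spaced stylus targets."""
    out = [path[0]]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        if dist == 0:
            continue
        for k in range(1, int(dist / step)):
            t = k * step / dist
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        out.append((x1, y1))
    return out

def replay(path: List[Point], pressure: float = 1.0) -> None:
    """Drive the pen along the path; the pressure never varies, unlike a hand."""
    for x, y in interpolate(path):
        move_arm_to(x, y, pressure)

replay([(0, 0), (5, 8), (10, 0), (15, 8)])  # a toy zigzag "signature"
```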
History
The first signature duplicating machines were developed by Englishman John Isaac Hawkins. Hawkins received a United States patent for his device in 1803, called a polygraph (an abstracted version of the pantograph), in which the user may write with one pen and have their writing simultaneously reproduced by an attached second pen. Thomas Jefferson used the device extensively during his presidency. This device bears little resemblance to today's autopens in design or operation. An autopen called the Robot Pen was developed in the 1930s and became commercially available in 1937; it recorded a signer's signature on a storage device, similar in principle to how vinyl records store information. A small segment of the record could be removed and stored elsewhere to prevent misuse. The machine could then mass-produce the template signature when needed.
While the Robot Pen was commercially available, the first commercially successful autopen was developed by Robert M. De Shazo Jr., in 1942. De Shazo developed the technology that became the modern autopen in response to a Request for Quote (RFQ) from the Navy, and in 1942 received an order for the machine from the Secretary of the Navy. This was the beginning of a significant market in government for the autopen, as the machines soon ended up in the offices of members of the House, the Senate, and the executive branch. At one point, De Shazo estimated there were more than 500 autopens in use in Washington, D.C.
Use
Individuals who use autopens often do not disclose this publicly. Signatures generated by machines are valued less than those created manually, and perceived by their recipients as somewhat inauthentic. In 2004, Donald Rumsfeld, then the U.S. Secretary of Defense, incurred criticism after it was discovered that his office used an autopen to sign letters of condolence to families of American soldiers who were killed in war.
Outside of politics, it was reported in November 2022 that some copies of The Philosophy of Modern Song, a book by singer-songwriter Bob Dylan that had been published earlier that month, had been signed with an autopen, resulting in criticism. Autographed editions had been marketed as "hand-signed" and priced at US$600 each. Both Dylan and the book's publisher, Simon & Schuster, issued apologies; refunds were also offered to customers who had bought autopen-signed editions. In addition, Dylan also said that some prints of his artwork sold after 2019 had been signed with an autopen, which he further apologized for and attributed his use of the machine to vertigo and the COVID-19 pandemic, the latter of which prevented him from meeting with staff to facilitate signing the works in question.
U.S. Presidents
It has long been known that the president of the United States uses multiple autopen systems to sign many official documents (e.g., military, diplomatic, and judicial commissions; some Acts of Congress, executive directives, letters and other correspondence), due to the volume of such documents requiring their signature per the U.S. Constitution. Some say Harry Truman was the first president to use the autopen as a way of responding to mail and signing checks. Others credit Gerald Ford as the first president to openly acknowledge his use of the autopen, but Lyndon Johnson allowed photographs of his autopen to be taken while he was in office, and in 1968 the National Enquirer ran them along with the front-page headline "The Robot That Sits In For The President."
While visiting France, Barack Obama authorized the use of an autopen to create his signature which signed into law an extension of three provisions of the Patriot Act. On January 3, 2013, he signed the extension to the Bush tax cuts, using the autopen while vacationing in Hawaii. In order to sign it by the required deadline, his other alternative would have been to have had the bill flown to him overnight. Republican leaders questioned whether this use of the autopen met the constitutional requirement for signing a bill into law, but the validity of presidential use of an autopen had not been actually tested in court. In 2005, George W. Bush asked for and received a favorable opinion from the Department of Justice regarding the constitutionality of using the autopen, but did not use it himself.
In May 2024, Joe Biden directed an autopen be used to sign legislation providing a one-week funding extension for the Federal Aviation Administration. Biden was traveling in San Francisco at the time, and wished to avoid any lapse in FAA operations, while a five-year funding bill was being voted on by Congress.
Similar devices
Further developing the class of devices known as autopens, Canadian author Margaret Atwood created a device called the LongPen, which allows audio and video conversation between the fan and author while a book is being signed remotely.
See also
Plotter
John Hancock
Rubber stamp (politics)
Seal (East Asia)
Telautograph
References
External links
A site with information on autopen signatures
collectSPACE: Identifying Astronaut Autopens
Machines
Identity documents
19th-century inventions
English inventions | Autopen | Physics,Technology,Engineering | 1,400 |
74,978,984 | https://en.wikipedia.org/wiki/Music%20%28Xperia%29 | Music, formerly known as Walkman, is an audio player software for Android. Developed by Sony Corporation (and previously by Sony Mobile), it is the default music player on Xperia devices and comes preloaded on them.
A similar Walkman app continues to exist on Walkman digital audio players, including those that run on Android.
History
Music was launched as Walkman in 2012, debuting on Sony's first in-house smartphones, the Xperia S, Xperia P and Xperia U. It replaced the previous Music Player app on Sony Ericsson devices. Before this, the music players on the Sony Ericsson Zylo and Live with Walkman were also called Walkman.
The Walkman app included all the features that were found on the digital Walkman portable music players of the time. It featured an interface similar to the 'Cover Flow' of the iPod. Features included SensMe channels and Music Unlimited integration.
"Smart Playlists" was introduced in a version 8.0 update along with further TrackID integration. There were some UI tweaks in version 8.4 in 2014. Android Wear support was also soon added.
In 2015, with Android 5.0 Lollipop, the Walkman app was renamed Music. A 'Quick Play' feature was later added. SensMe was removed in 2016. Later visual updates styled the player along the lines of Material Design.
Music also featured (on some models) DSEE HX sound processing. DSEE Ultimate is featured starting with Xperia 1 II.
The app was eventually stripped down, with the equalizer and headphone settings disappearing.
As of version 9.4.7 (May 2020), the Gracenote metadata feature in the Music app was removed.
Features
Dark theme
Playlists in .m3u format
Sleep timer
Sound effects
See also
Comparison of audio player software
List of music software
References
Audio software
Android (operating system) software
Android media players | Music (Xperia) | Engineering | 389 |
69,380,218 | https://en.wikipedia.org/wiki/Hypergraph%20regularity%20method | In mathematics, the hypergraph regularity method is a powerful tool in extremal graph theory that refers to the combined application of the hypergraph regularity lemma and the associated counting lemma. It is a generalization of the graph regularity method, which refers to the use of Szemerédi's regularity and counting lemmas.
Very informally, the hypergraph regularity lemma decomposes any given $k$-uniform hypergraph into a random-like object with bounded parts (under appropriate notions of boundedness and randomness) that is usually easier to work with. On the other hand, the hypergraph counting lemma estimates the number of hypergraphs of a given isomorphism class in certain collections of the random-like parts. This is an extension of Szemerédi's regularity lemma, which partitions any given graph into a bounded number of parts such that the edges between the parts behave almost randomly. Similarly, the hypergraph counting lemma is a generalization of the graph counting lemma, which estimates the number of copies of a fixed graph as a subgraph of a larger graph.
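For orientation, the simplest graph case (which the hypergraph statements generalize) is the classical triangle counting lemma, a standard fact stated here for illustration rather than taken from this article: if the three bipartite graphs between vertex sets $X$, $Y$, $Z$ are $\epsilon$-regular with edge densities $d_{XY}, d_{XZ}, d_{YZ} \geq 2\epsilon$, then the number of triangles with one vertex in each part is at least
$$(1 - 2\epsilon)\,(d_{XY} - \epsilon)(d_{XZ} - \epsilon)(d_{YZ} - \epsilon)\,|X|\,|Y|\,|Z|.$$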
There are several distinct formulations of the method, all of which imply the hypergraph removal lemma and a number of other powerful results, such as Szemerédi's theorem, as well as some of its multidimensional extensions. The following formulations are due to V. Rödl, B. Nagle, J. Skokan, M. Schacht, and Y. Kohayakawa; for alternative versions see Tao (2006) and Gowers (2007).
Definitions
In order to state the hypergraph regularity and counting lemmas formally, we need to define several rather technical terms to formalize appropriate notions of pseudo-randomness (random-likeness) and boundedness, as well as to describe the random-like blocks and partitions.
Notation
$K_m^{(j)}$ denotes the $j$-uniform clique on $m$ vertices, i.e. the $j$-graph in which every $j$-element subset of the $m$ vertices is an edge.
$G^{(j)}$ is an $\ell$-partite $j$-graph on the vertex partition $V_1 \cup \cdots \cup V_\ell$.
$\mathcal{K}_j(G^{(j-1)})$ is the family of all $j$-element vertex sets that span the clique $K_j^{(j-1)}$ in $G^{(j-1)}$. In particular, $K_\ell^{(j)}(V_1, \ldots, V_\ell)$ denotes the complete $\ell$-partite $j$-graph.
The following defines an important notion of relative density, which roughly describes the fraction of -edges spanned by -edges that are in the hypergraph. For example, when , the quantity is equal to the fraction of triangles formed by 2-edges in the subhypergraph that are 3-edges. Definition [Relative density]. For , fix some classes of with . Suppose is an integer. Let be a subhypergraph of the induced -partite graph . Define the relative density .What follows is the appropriate notion of pseudorandomness that the regularity method will use. Informally, by this concept of regularity, -edges () have some control over -edges (). More precisely, this defines a setting where density of edges in large subhypergraphs is roughly the same as one would expect based on the relative density alone. Formally,Definition [()-regularity]. Suppose are positive real numbers and is an integer. is ()-regular with respect to if for any choice of classes and any collection of subhypergraphs of satisfying we have .Roughly speaking, the following describes the pseudorandom blocks into which the hypergraph regularity lemma decomposes any large enough hypergraph. In Szemerédi regularity, 2-edges are regularized versus 1-edges (vertices). In this generalized notion, -edges are regularized versus -edges for all . More precisely, this defines a notion of regular hypergraph called -complex, in which existence of -edge implies existence of all underlying -edges, as well as their relative regularity. For example, if is a 3-edge then ,, and are 2-edges in the complex. Moreover, the density of 3-edges over all possible triangles made by 2-edges is roughly the same in every collection of subhypergraphs.Definition [-regular -complex]. An -complex is a system of -partite graphs satisfying . Given vectors of positive real numbers , , and an integer , we say -complex is -regular if
For each , is -regular with density .
For each , is ()-regular with respect to .

The following describes the equitable partition that the hypergraph regularity lemma will induce. A -equitable family of partitions is a sequence of partitions of 1-edges (vertices), 2-edges (pairs), 3-edges (triples), and so on. This is an important distinction from the partition obtained by Szemerédi's regularity lemma, where only the vertices are partitioned. In fact, Gowers demonstrated that a vertex partition alone cannot give a sufficiently strong notion of regularity to imply the hypergraph counting lemma.

Definition [-equitable partition]. Let be a real number, be an integer, and , be vectors of positive reals. Let be a vector of positive integers and be an -element vertex set. We say that a family of partitions on is -equitable if it satisfies the following:
is an equitable vertex partition of . That is, .
partitions so that if and then is partitioned into at most parts, all of which are members of .
For all but at most -tuples there is a unique -regular -complex such that has as members different partition classes from and .

Finally, the following defines what it means for a -uniform hypergraph to be regular with respect to a partition. In particular, this is the main definition that describes the output of the hypergraph regularity lemma below.

Definition [Regularity with respect to a partition]. We say that a -graph is -regular with respect to a family of partitions if all but at most edges of have the property that and, if is the unique -complex for which , then is regular with respect to .
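Because the article's inline formulas were lost in transcription, the relative density defined above can only be sketched here. In the standard notation of Rödl and Schacht it takes roughly the following form; every symbol below is a reconstruction rather than the article's own notation:

```latex
% Hedged reconstruction: the density of a k-uniform hypergraph H relative
% to an underlying subhypergraph Q is the fraction of the k-vertex cliques
% spanned by Q that are also edges of H.
d(H \mid Q) \;=\;
  \frac{\bigl|\, H \cap \mathcal{K}_k(Q) \,\bigr|}{\bigl|\, \mathcal{K}_k(Q) \,\bigr|}
```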
Statements
Hypergraph regularity lemma
For all positive reals , , and functions , there exist and so that the following holds. For any -uniform hypergraph on vertices, there exists a family of partitions and a vector so that, for and where for all , the following holds.
is a -equitable family of partitions and for every .
is regular with respect to .
Hypergraph counting lemma
For all integers the following holds: there are integers and so that, with , , and ,
if is a -regular complex with vertex partition and , then
.
Applications
The main application, through which most others follow, is the hypergraph removal lemma. It roughly states that, given a fixed hypergraph and a large -uniform hypergraph, if the large hypergraph contains few copies of the fixed one, then one can delete few of its hyperedges to eliminate all of those copies. To state it more formally,
Hypergraph removal lemma
For all and every , there exists and so that the following holds. Suppose is a -uniform hypergraph on vertices and is one on vertices. If contains at most copies of , then one can delete hyperedges in to make it -free. One of the original motivations for the graph regularity method was to prove Szemerédi's theorem, which states that every dense subset of the integers contains arbitrarily long arithmetic progressions. In fact, by a relatively simple application of the triangle removal lemma, one can prove that every dense subset of the integers contains an arithmetic progression of length 3.
The hypergraph regularity method and the hypergraph removal lemma can prove high-dimensional and ring analogues of the density version of Szemerédi's theorem, originally proved by Furstenberg and Katznelson. In fact, this approach yields the first quantitative bounds for these theorems.
This theorem roughly implies that any dense subset of contains any finite pattern of . The case when the dimension is 1 and the pattern is an arithmetic progression of some length is equivalent to Szemerédi's theorem.

Furstenberg and Katznelson Theorem
Source:
Let be a finite subset of and let be given. Then there exists a finite subset such that every with contains a homothetic copy of (i.e., a set of the form , for some and ).
Moreover, if for some , then there exists such that has this property for all .

Another possible generalization that can be proven by the removal lemma arises when the dimension is allowed to grow.

Tengan, Tokushige, Rödl, and Schacht Theorem
Let be a finite ring. For every , there exists such that, for , any subset with contains a coset of an isomorphic copy of (as a left -module).
In other words, there are some such that , where , is an injection.
References
Graph theory | Hypergraph regularity method | Mathematics | 1,765 |
15,064,796 | https://en.wikipedia.org/wiki/BOK%20%28gene%29 | Bok (Bcl-2 related ovarian killer) is a protein-coding gene of the Bcl-2 family that is found in many invertebrates and vertebrates. It induces apoptosis, a special type of cell death. Currently, the precise function of Bok in this process is unknown.
Discovery and homology
In 1997, the protein Bcl-2-related ovarian killer (Bok) was identified in a yeast two-hybrid experiment with a rat ovarian cDNA library in a screen for proteins interacting with Mcl-1, an abundant anti-apoptotic protein. The overexpression of Bok induces apoptosis. Because of its high sequence similarity to Bak and Bax, Bok is classified as a member of the Bcl-2 protein family.
The mouse homologue of Bok is called Matador (Mtd). This name is derived from the Latin term mactator which means butcher or killer. Additionally, homologous proteins were found in Drosophila melanogaster (fruit fly) and Gallus gallus (chicken).
Promoter and gene structure
The human BOK promoter is activated by the overexpression of members of the E2F transcription factor family. Typically, these transcription factors are involved in the promotion of S-phase, so there might be a connection between Bok expression and cell-cycle progression. Due to this regulation of Bok expression by the cell cycle, it was proposed that Bok sensitizes growing cells to stress-induced apoptosis.
Bok mRNA comprises five exons which code for a 213 amino acid protein, called Bok-L. This protein consists of four Bcl-2 homology domains (abbreviated BH1, BH2, BH3, BH4, respectively) and a C-terminal transmembrane region (Figure 1). Its BH3 domain contains a stretch with many leucine residues. This is unique among the Bcl-2 family members. The leucine-rich stretch functions as a nuclear export signal. It is recognized by the nuclear exportin Crm1. Mutations in the leucine-rich stretch impair the binding of Crm1 to Bok. Consequently, Bok accumulates in the nucleus and triggers apoptosis.
Splice variants
Due to alternative splicing, Bok mRNA gives rise to different Bok proteins: Figure 1 illustrates the different splice variants schematically. Full length Bok is named Bok-L.
The shorter version, Bok-S, lacks exon 3. This results in a fusion of the BH3 domain with the BH1 domain. The BH3 domain is involved in the interaction of Bok with Mcl-1 and other molecules. It is dispensable for the induction of apoptosis. Expression of Bok-S may be an immediate response to stress signals. It has been shown to induce apoptosis regardless of the presence of anti-apoptotic molecules.
Another splice variant termed Bok-P was found in placental tissue from patients with pre-eclampsia. While Bok-S misses exon 3, Bok-P lacks exon 2. This deletion includes the BH4 domain and parts of the BH3 domain. Bok-P may be the cause for trophoblast cell death in pre-eclampsia, a dangerous pregnancy complication. In pre-eclampsia, typical alterations occur in the maternal kidney and lead to hypertension and proteins in the urine. To date, the cause of this medical condition as well as an appropriate treatment have not been discovered.
Expression pattern
The Bok gene is activated and produces protein in different tissues. In mice, elevated Bok levels were detected in the ovary, the testis, and the uterus. Nevertheless, it also exists in the brain and at low levels in most other tissues. However, the expression pattern of the Bok gene varies among species.
In humans, Bok is found in a wide range of tissues. The gene is expressed in the colon, the stomach, the testes, the placenta, the pancreas, the ovaries, and the uterus. Furthermore, more Bok is expressed in fetal tissue compared to adult tissue. Thus, Bok may influence development.
Subcellular localization
The subcellular localization of Bok protein is controversial. In proliferating cells, Bok is found in the nucleus. Upon induction of apoptosis, it was found to tightly associate with mitochondrial membranes. On the other hand, another group found Bok shuttling between the cytoplasm and the nucleus. In their experiments, increased nuclear (not mitochondrial) localization correlated with a stronger apoptotic activity.
Regulation
It was found that the cellular ratio of pro-apoptotic to anti-apoptotic Bcl-2 family members affects late apoptotic events such as the release of cytochrome c from the mitochondria and the activation of caspases. Higher levels of pro-apoptotic proteins compared to anti-apoptotic proteins seem to cause apoptosis. In a current model, the formation of heterodimers between pro-apoptotic and anti-apoptotic proteins prevents induction of apoptosis.
Interactions
The binding of Bok to its interacting partners seems to be mediated by its BH3 domain. The splice variant Bok-S lacks this domain and is unable to form heterodimers with other proteins of the Bcl-2 family.
In yeast two-hybrid experiments, Bok was found to interact with the anti-apoptotic proteins Mcl-1, BHRF-1, and Bfl-1. However, interactions with other anti-apoptotic proteins such as Bcl-2, Bcl-xL, and Bcl-w were not detectable (1). Later studies aimed at confirming an interaction between Bok and pro-apoptotic Bak or Bax but were not successful.
Accordingly, coexpression of anti-apoptotic proteins such as Mcl-1 suppresses apoptosis induced by Bok overexpression. Consistent with the results mentioned above, coexpression of anti-apoptotic Bcl-2 does not prevent Bok-induced apoptosis.
Knock-out mouse
Since its discovery in 1997, several attempts have been made to characterize Bok. Due to the increased expression levels in fetal tissue, scientists anticipated a developmental role for Bok. Recently, the Bok knock-out mouse was created. This mouse shows, however, no developmental defects and normal fertility. This finding indicates that the function of Bok seems to overlap with the function of the related pro-apoptotic proteins Bak and Bax.
Several other roles were proposed for Bok, especially in developing cells. Since the action of Bok in triggering apoptosis seems to be redundant, it is difficult to assign a specific role to Bok in the presence of Bak and Bax. The study of cells deficient in Bak and Bok or deficient in Bax and Bok, respectively, could help to better characterize the role of Bok in apoptosis. If Bok exerts a critical function, it is likely that this function is limited to certain circumstances, e.g. specific cell types, stress conditions. Thus, these aspects should be assessed in more detail to analyze the physiological and pathological role of Bok.
References
External links
Genes
Protein families | BOK (gene) | Biology | 1,590 |
2,770,340 | https://en.wikipedia.org/wiki/OS-level%20virtualization | OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman), zones (Solaris Containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), and jails (FreeBSD jail and chroot). Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. Programs running inside a container can only see the container's contents and devices assigned to the container.
On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.
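A minimal sketch of these kernel primitives, assuming Linux, root privileges, and Python 3.12+ (which exposes os.unshare); the root-filesystem path is a hypothetical example:

```python
import os

# NEW_ROOT is a hypothetical directory pre-populated with a root filesystem.
NEW_ROOT = "/srv/container-root"

def enter_container():
    # Detach from the parent's mount and hostname (UTS) namespaces, so
    # mounts and hostname changes made here stay invisible to the host.
    os.unshare(os.CLONE_NEWNS | os.CLONE_NEWUTS)
    # The classic chroot step: change the apparent root directory for
    # this process and its children.
    os.chroot(NEW_ROOT)
    os.chdir("/")

if __name__ == "__main__":
    enter_container()
    # A program exec'd here sees only the contents of NEW_ROOT.
    os.execv("/bin/sh", ["/bin/sh"])
```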
Although the word container most commonly refers to OS-level virtualization, it is sometimes used to refer to fuller virtual machines operating in varying degrees of concert with the host OS, such as Microsoft's Hyper-V containers. For an overview of virtualization since 1960, see Timeline of virtualization technologies.
Operation
On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:
Hardware capabilities that can be employed, such as the CPU and the network connection
Data that can be read or written, such as files, folders and network shares
Connected peripherals it can interact with, such as webcam, printer, scanner, or fax
The operating system may be able to allow or deny access to such resources based on which program requests them and the user account in the context in which it runs. The operating system may also hide those resources, so that when the computer program enumerates them, they do not appear in the enumeration results. Nevertheless, from a programming point of view, the computer program has interacted with those resources and the operating system has managed an act of interaction.
With operating-system-level virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can only see the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, to each of which a subset of the computer's resources is allocated. Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with one another.
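A sketch of how such a resource subset might be allocated via the cgroups v2 filesystem interface, assuming Linux with /sys/fs/cgroup mounted and root privileges; the cgroup name, limits, and PID are invented examples:

```python
from pathlib import Path

cg = Path("/sys/fs/cgroup/demo-container")
cg.mkdir(exist_ok=True)

(cg / "memory.max").write_text("268435456")   # cap memory at 256 MiB
(cg / "cpu.max").write_text("50000 100000")   # 50 ms of CPU per 100 ms period

pid = 12345  # hypothetical PID of a process to confine
(cg / "cgroup.procs").write_text(str(pid))    # move it into the cgroup
```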
Containerization has similarities to application virtualization: in the latter, only one computer program is placed in an isolated container, and the isolation applies only to the file system.
Uses
Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users. System administrators may also use it to consolidate server hardware by moving services hosted on separate machines into containers on a single server.
Other typical scenarios include separating several programs to separate containers for improved security, hardware independence, and added resource management features. The improved security provided by the use of a chroot mechanism, however, is not perfect. Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.
Overhead
Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in OS-level virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU, or Hyper-V) and paravirtualization (such as Xen or User-mode Linux). This form of virtualization also does not require hardware support for efficient performance.
Flexibility
Operating-system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.
Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux user space and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones. However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in Illumos, which is the open source Solaris fork, supporting 32-bit Linux kernels.
Storage
Some implementations provide file-level copy-on-write (CoW) mechanisms. (Most commonly, a standard file system is shared between partitions, and those partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
Implementations
Linux containers not listed above include:
LXD, an alternative wrapper around LXC developed by Canonical
Podman, an advanced Kubernetes ready root-less secure drop-in replacement for Docker with support for multiple container image formats, including OCI and Docker images
Charliecloud, a set of container tools used on HPC systems
Kata Containers MicroVM Platform
Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts
Azure Linux is an open-source Linux distribution that is purpose-built by Microsoft Azure and similar to Fedora CoreOS
See also
Container Linux
Container orchestration
Flatpak package manager
Linux cgroups
Linux namespaces
Hypervisor
Portable application creators
Open Container Initiative
Sandbox (software development)
Separation kernel
Serverless computing
Snap package manager
Storage hypervisor
Virtual private server (VPS)
Virtual resource partitioning
Notes
References
External links
An introduction to virtualization
A short intro to three different virtualization techniques
Virtualization and containerization of application infrastructure: A comparison, June 22, 2015, by Mathijs Jeroen Scheepers
Containers and persistent data, LWN.net, May 28, 2015, by Josh Berkus
Operating system security
Virtualization
Linux
Linux containerization
Linux kernel features | OS-level virtualization | Engineering | 1,554 |
48,017,959 | https://en.wikipedia.org/wiki/Tolerable%20weekly%20intake | Tolerable weekly intake (TWI) estimates the amount per unit body weight of a potentially harmful substance or contaminant in food or water that can be ingested over a lifetime without risk of adverse health effects. TWI is generally preceded by "provisional" to indicate insufficient data exists, increasing uncertainty. The term TWI should be reserved for when there is a well-established and internationally accepted tolerance, backed by sound and uncontested data. Although similar in concept to tolerable daily intake (TDI), which has the same derivation as acceptable daily intakes (ADIs), TWI accounts for contaminants that do not clear the body quickly and may accumulate within the body over a period of time. Examples include heavy metals such as arsenic, cadmium, lead, and mercury. The concept of TWI takes into account daily variations in human consumption patterns.
Background
Governments and international organizations such as the Joint FAO/WHO Expert Committee on Food Additives (JECFA), the Joint FAO/WHO Meeting on Pesticide Residues (JMPR), the World Health Organization (WHO) and the Food and Agriculture Organization of the United Nations (FAO) generally use the safety factor approach, based on ADI, to determine intake tolerances for substances that exhibit thresholds for toxicity. The Codex Alimentarius Commission, with the help of independent international risk assessment bodies or ad-hoc consultations organized by FAO and WHO, develops and publishes tolerances based on the best available science. After identifying a substance of concern, researchers and experts study information on the substance's metabolism by humans and animals (as appropriate); the substance's toxicokinetics and toxicodynamics (including carry-over of the toxic substance from feed to edible animal tissue/products); and the substance's acute and long-term toxicity, in order to determine the acceptability and safety of intake levels of the substance. In comparison to TWI, the Codex maximum level (ML) for a food is the maximum concentration of that substance recommended by the Codex Alimentarius Commission (CAC) to be legally permitted in that commodity.
Data sources
The JECFA makes a distinction between acceptable intakes and tolerable intakes. Tolerable is used to demonstrate permissibility, not acceptability. Substances such as food additives, veterinary drugs, and pesticides that can be controlled in the food supply relatively easily are assessed an acceptable daily intake, or ADI. Other substances considered contaminants are assessed a tolerable daily or weekly intake, TDI or TWI, respectively. Tolerable intakes, whether daily, weekly, or monthly, should not be confused with reference daily intake, or RDI. RDI refers to the amount of a given nutrient individuals should consume to maintain health.
When determining TWI, appropriate safety factors are applied to allow for extrapolation of human no-observed-adverse-effect levels (NOAELs). NOAELs are established by toxicological studies in animals, and the appropriate safety factors are applied so human NOAELs can be extrapolated. For additives that have an observed effect which is not known to be negative, the ADI is based on a no-observed-effect level, or NOEL. To determine the NOEL, researchers primarily use animals (the species most sensitive to the treatments) and carefully select doses until they determine the highest dose at which no effect is observed. When adverse effects predominate, the JMPR bases the ADI on the NOAELs. Uncertainty factors are applied to account for intra- and inter-species differences.
Calculations
Tolerable intake is usually expressed in micrograms or milligrams per kilogram of body weight. Intake (exposure) is determined using the following formula:
Exposure = [ Σi (consumption)i × (concentration)i ] / body weight
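As an illustration, the calculation might be implemented as follows; the diet data are invented, and the comparison value reflects EFSA's commonly cited cadmium TWI of 2.5 µg/kg body weight per week:

```python
def weekly_exposure(intakes, body_weight_kg):
    """intakes: list of (food eaten per week in kg, contaminant in µg per kg of food)."""
    return sum(amount * conc for amount, conc in intakes) / body_weight_kg

diet = [
    (1.4, 25.0),  # e.g. cereals: 1.4 kg/week at 25 µg/kg
    (0.5, 80.0),  # e.g. shellfish: 0.5 kg/week at 80 µg/kg
    (2.0, 5.0),   # e.g. vegetables: 2.0 kg/week at 5 µg/kg
]
exposure = weekly_exposure(diet, body_weight_kg=70)
print(f"{exposure:.2f} µg/kg bw/week")  # ≈ 1.21, below a TWI of 2.5
```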
References
Concentration indicators
Toxicology | Tolerable weekly intake | Environmental_science | 817 |
25,001,172 | https://en.wikipedia.org/wiki/List%20of%20map%20projections | This is a summary of map projections that have articles of their own on Wikipedia or that are otherwise notable. Because there is no limit to the number of possible map projections, there can be no comprehensive list.
Table of projections
*The first known popularizer/user and not necessarily the creator.
Key
Type of projection surface
Cylindrical In normal aspect, these map regularly-spaced meridians to equally spaced vertical lines, and parallels to horizontal lines; the Mercator projection is the classic example (see the sketch after this key).
Pseudocylindrical In normal aspect, these map the central meridian and parallels as straight lines. Other meridians are curves (or possibly straight from pole to equator), regularly spaced along parallels.
Conic In normal aspect, conic (or conical) projections map meridians as straight lines, and parallels as arcs of circles.
Pseudoconical In normal aspect, pseudoconical projections represent the central meridian as a straight line, other meridians as complex curves, and parallels as circular arcs.
Azimuthal In standard presentation, azimuthal projections map meridians as straight lines and parallels as complete, concentric circles. They are radially symmetrical. In any presentation (or aspect), they preserve directions from the center point. This means great circles through the central point are represented by straight lines on the map.
Pseudoazimuthal In normal aspect, pseudoazimuthal projections map the equator and central meridian to perpendicular, intersecting straight lines. They map parallels to complex curves bowing away from the equator, and meridians to complex curves bowing in toward the central meridian. Listed here after pseudocylindrical as generally similar to them in shape and purpose.
Other Typically calculated from a formula, and not based on a particular projection surface
Polyhedral maps Polyhedral maps can be folded up into a polyhedral approximation to the sphere, using a particular projection to map each face with low distortion.
Properties
Conformal Preserves angles locally, implying that local shapes are not distorted and that local scale is constant in all directions from any chosen point.
Equal-area Area measure is conserved everywhere.
Compromise Neither conformal nor equal-area, but a balance intended to reduce overall distortion.
Equidistant All distances from one (or two) points are correct. Other equidistant properties are mentioned in the notes.
Gnomonic All great circles are straight lines.
Retroazimuthal Direction to a fixed location B (by the shortest route) corresponds to the direction on the map from A to B.
Perspective Can be constructed by light shining through a globe onto a developable surface.
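These categories combine: the Mercator projection, for instance, is both cylindrical and conformal. A minimal sketch of its normal-aspect formulas on a unit sphere follows; these are standard textbook formulas, not tied to any particular entry above:

```python
from math import log, tan, radians, pi

# A cylindrical, conformal projection (Mercator) in normal aspect on a
# unit-radius sphere: meridians become equally spaced vertical lines,
# parallels become horizontal lines, and local angles are preserved.
def mercator(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    lam, phi = radians(lon_deg), radians(lat_deg)
    x = lam                          # longitude maps linearly to x
    y = log(tan(pi / 4 + phi / 2))   # latitude is stretched to keep conformality
    return x, y

print(mercator(51.5, -0.13))  # roughly London: x ≈ -0.002, y ≈ 1.05
```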
See also
360 video projection
List of national coordinate reference systems
Snake Projection
Notes
Further reading | List of map projections | Mathematics | 524 |
36,392,884 | https://en.wikipedia.org/wiki/USA-177 | USA-177, also known as GPS IIR-11 and GPS SVN-59, is an American navigation satellite which forms part of the Global Positioning System. It was the eleventh Block IIR GPS satellite to be launched, out of thirteen in the original configuration, and twenty one overall. It was built by Lockheed Martin, using the AS-4000 satellite bus.
USA-177 was launched at 17:53:00 UTC on 20 March 2004, atop a Delta II carrier rocket, flight number D303, flying in the 7925-9.5 configuration. The launch took place from Space Launch Complex 17B at the Cape Canaveral Air Force Station, and placed USA-177 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37FM apogee motor.
By 20 May 2004, USA-177 was in an orbit with a perigee of , an apogee of , a period of 718 minutes, and 55 degrees of inclination to the equator. It is used to broadcast the PRN 19 signal, and operates in slot 3 of plane C of the GPS constellation. The satellite has a mass of , and a design life of 10 years. As of 2012 it remains in service.
References
Spacecraft launched in 2004
GPS satellites
USA satellites | USA-177 | Technology | 263 |
68,408,273 | https://en.wikipedia.org/wiki/Nuclearite | Nuclearites are hypothetical objects consisting of nuggets of strange quark matter or a strangelet surrounded by an electron shell, forming an atom-like neutral system, but with masses much larger than a normal atom. These heavy compact particles were first proposed by E. Witten, and the name was coined by A. De Rújula and S. L. Glashow to describe such particles colliding with the Earth's atmosphere, by analogy to more conventional meteorites. It is predicted that nuclearites would travel at hundreds of kilometers per second. Owing to their high energies and mass-to-size ratio, they should form streaks of light in the lower atmospheric regions. To date, no nuclearites have been successfully observed, but this failure itself places constraints on some theories of dark matter.
Properties of nuclearites
The strangelet forms what is called a nuclearite core, composed primarily of up, down, and strange quarks in almost equal proportions. Nuclearites are estimated to have masses between 0.1 and 100 kg. Additionally, they are predicted to be more stable than particles composed solely of up and down quarks. Nuclearites are expected to have a constant matter density. The hypothesized sources of these particles are relics from the early universe or the Big Bang, as well as extremely energetic astrophysical phenomena such as the merger of two quark stars.
Experimental techniques for detection
Nuclearites should in principle be detectable based on their interaction with the Earth's atmosphere, with neutrino telescopes, and in collider experiments. In particular, neutrino telescopes such as ANTARES or IceCube are possible detectors for nuclearites.
See also
Strangelet
Cosmic rays
References
Exotic matter
Hypothetical objects | Nuclearite | Physics | 348 |
64,681,276 | https://en.wikipedia.org/wiki/6G | In telecommunications, 6G is the designation for a future technical standard of a sixth-generation technology for wireless communications.
It is the planned successor to 5G (ITU-R IMT-2020), and is currently in the early stages of the standardization process, tracked by the ITU-R as IMT-2030 with the framework and overall objectives defined in recommendation ITU-R M.2160-0. Similar to previous generations of the cellular architecture, standardization bodies such as 3GPP and ETSI, as well as industry groups such as the Next Generation Mobile Networks (NGMN) Alliance, are expected to play a key role in its development.
Numerous companies (Airtel, Anritsu, Apple, Ericsson, Fly, Huawei, Jio, Keysight, LG, Nokia, NTT Docomo, Samsung, Vi, Xiaomi), research institutes (Technology Innovation Institute, the Interuniversity Microelectronics Centre) and countries (United States, United Kingdom, European Union member states, Russia, China, India, Japan, South Korea, Singapore, Saudi Arabia, United Arab Emirates, and Israel) have shown interest in 6G networks, and are expected to contribute to this effort.
6G networks will likely be significantly faster than previous generations, thanks to further improvements in radio interface modulation and coding techniques, as well as physical-layer technologies. Proposals include a ubiquitous connectivity model which could include non-cellular access such as satellite and WiFi, precise location services, and a framework for distributed edge computing supporting more sensor networks, AR/VR and AI workloads. Other goals include network simplification and increased interoperability, lower latency, and energy efficiency. It should enable network operators to adopt flexible decentralized business models for 6G, with local spectrum licensing, spectrum sharing, infrastructure sharing, and intelligent automated management. Some have proposed that machine-learning/AI systems can be leveraged to support these functions.
The NGMN alliance have cautioned that "6G must not inherently trigger a hardware refresh of 5G RAN infrastructure", and that it must "address demonstrable customer needs". This reflects industry sentiment about the cost of the 5G rollout, and concern that certain applications and revenue streams have not lived up to expectations. 6G is expected to begin rolling out in the early 2030s, but given such concerns it is not yet clear which features and improvements will be implemented first.
Expectations
6G networks are expected to be developed and released by the early 2030s. The largest number of 6G patents have been filed in China.
Features
Recent academic publications have been conceptualizing 6G and new features that may be included. Artificial intelligence (AI) is included in many predictions, from 6G supporting AI infrastructure to "AI designing and optimizing 6G architectures, protocols, and operations." Another study in Nature Electronics looks to provide a framework for 6G research stating "We suggest that human-centric mobile communications will still be the most important application of 6G and the 6G network should be human-centric. Thus, high security, secrecy and privacy should be key features of 6G and should be given particular attention by the wireless research community."
Transmission
The frequency bands for 6G are undetermined. Initially, terahertz frequencies were considered an important band for 6G, as indicated by the Institute of Electrical and Electronics Engineers, which stated that "Frequencies from 100 GHz to 3 THz are promising bands for the next generation of wireless communication systems because of the wide swaths of unused and unexplored spectrum."
One of the challenges in supporting the required high transmission speeds will be the limitation of energy consumption and associated thermal protection in the electronic circuits.
As of now, mid-band frequencies are being considered by the WRC for 6G/IMT-2030.
Terahertz and millimeter wave progress
Millimeter waves (30 to 300 GHz) and terahertz radiation (300 to 3,000 GHz) might, according to some speculations, be used in 6G. However, the propagation of these frequencies is much more sensitive to obstacles than the microwave frequencies (about 2 to 30 GHz) used in 5G and Wi-Fi, which in turn are more sensitive than the radio waves used in 1G, 2G, 3G and 4G. Therefore, there are concerns that these frequencies may not be commercially viable, especially considering that 5G mmWave deployments are very limited due to deployment costs.
In October 2020, the Alliance for Telecommunications Industry Solutions (ATIS) launched a "Next G Alliance", an alliance consisting of AT&T, Ericsson, Telus, Verizon, T-Mobile, Microsoft, Samsung, and others that "will advance North American mobile technology leadership in 6G and beyond over the next decade."
In January 2022, Purple Mountain Laboratories of China claimed that its research team had achieved a world record of 206.25 gigabits per second (Gbit/s) data rate for the first time in a lab environment within the terahertz frequency band, which is supposed to be the base of 6G cellular technology.
In February 2022, Chinese researchers stated that they had achieved a record data-streaming speed using vortex millimetre waves, a form of extremely high-frequency radio wave with rapidly changing spins: the researchers transmitted 1 terabyte of data over a distance of 1 km (3,300 ft) in one second. The spinning potential of radio waves was first reported by British physicist John Henry Poynting in 1909, but making use of it proved difficult. Zhang and colleagues said their breakthrough built on the work of many research teams across the globe over the past few decades. Researchers in Europe conducted the earliest communication experiments using vortex waves in the 1990s. A major challenge is that the size of the spinning waves increases with distance, and the weakening signal makes high-speed data transmission difficult. The Chinese team built a transmitter that generates a more focused vortex beam, making the waves spin in three different modes to carry more information, and developed a high-performance receiving device that could pick up and decode a huge amount of data in a split second.
In 2023, Nagoya University in Japan reported the successful fabrication of three-dimensional waveguides made of niobium, a superconducting material that minimizes attenuation due to absorption and radiation, for the transmission of waves in the frequency band deemed useful for 6G networking.
Test satellites
On November 6, 2020, China launched a Long March 6 rocket with a payload of thirteen satellites into orbit. One of the satellites reportedly served as an experimental testbed for 6G technology, which was described as "the world's first 6G satellite."
Geopolitics
During rollout of 5G, China banned Ericsson in favour of Chinese suppliers, primarily Huawei and ZTE. Huawei and ZTE were banned in many Western countries over concerns of spying. This creates a risk of 6G network fragmentation. Many power struggles are expected during the development of common standards. In February 2024, the U.S., Australia, Canada, the Czech Republic, Finland, France, Japan, South Korea, Sweden and the U.K. released a joint statement stating that they support a set of shared principles for 6G for "open, free, global, interoperable, reliable, resilient, and secure connectivity."
6G is considered a key technology for economic competitiveness, national security, and the functioning of society. It is a national priority in many countries and is named as priority in China's Fourteenth five-year plan.
Many countries are favouring the OpenRAN approach, where different suppliers can be integrated together and hardware and software are independent of supplier.
References
External links
Mobile telecommunications
Internet of things
Data centers
Wireless communication systems
Technology forecasting | 6G | Technology | 1,611 |
70,401,497 | https://en.wikipedia.org/wiki/HD%20171819 | HD 171819, also known as HR 6986 or rarely 22 G. Telescopii, is a solitary star located in the southern constellation Telescopium. It is faintly visible to the naked eye as a white-hued object with an apparent magnitude of 5.84. The object is located relatively close at a distance of 313 light years based on Gaia DR3 parallax measurements, but it is approaching the Solar System with a heliocentric radial velocity of . At its current distance, HD 171819's brightness is diminished by one-quarter of a magnitude due to interstellar dust and it has an absolute magnitude of +0.65.
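As a check on the figures quoted above, the absolute magnitude follows from the standard distance modulus; a minimal sketch using the rounded values from this article:

```python
from math import log10

# Distance-modulus arithmetic behind the quoted absolute magnitude,
# using the rounded values given in the article.
m = 5.84              # apparent magnitude
d_pc = 313 / 3.2616   # 313 light-years converted to parsecs
A = 0.25              # one-quarter magnitude of interstellar dimming

M = m - 5 * log10(d_pc / 10) - A
print(f"M ≈ {M:+.2f}")  # ≈ +0.68, consistent with the quoted +0.65
```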
HD 171819 has a stellar classification of A7 IV/V, indicating that the object is a late A-type star with the blended luminosity class of a main-sequence star and a subgiant. However, astronomer William Buscombe gave it a class of A3 V, instead making it an ordinary A-type main-sequence star. Evolutionary models give it an age of 855 million years and place it towards the end of its main-sequence life. At present it has 1.73 times the mass of the Sun and an enlarged radius of 3.37 times that of the Sun. It radiates 33.3 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 171819 has near-solar metallicity at [Fe/H] = −0.02.
References
A-type main-sequence stars
A-type subgiants
Telescopium
Telescopii, 22
CD-48 12644
171819
091461
6986 | HD 171819 | Astronomy | 346 |
23,190,613 | https://en.wikipedia.org/wiki/Hurwitz%27s%20theorem%20%28number%20theory%29 | In number theory, Hurwitz's theorem, named after Adolf Hurwitz, gives a bound on a Diophantine approximation. The theorem states that for every irrational number ξ there are infinitely many relatively prime integers m, n such that |ξ − m/n| < 1/(√5 n²).
The condition that ξ is irrational cannot be omitted. Moreover, the constant √5 is the best possible; if we replace √5 by any number A > √5 and we let ξ = (1 + √5)/2 (the golden ratio), then there exist only finitely many relatively prime integers m, n such that the formula above holds.
The theorem is equivalent to the claim that the Markov constant of every irrational number is at least √5.
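The bound can be checked numerically with continued-fraction convergents; a small sketch follows (an illustration, not a proof — ξ = √2 and the number of terms are arbitrary choices):

```python
from fractions import Fraction
from math import sqrt

# The continued-fraction convergents m/n of ξ = sqrt(2), whose expansion
# is [1; 2, 2, 2, ...], are checked against the bound 1/(sqrt(5) * n**2).
def convergents(terms):
    m_prev, m, n_prev, n = 1, terms[0], 0, 1
    yield Fraction(m, n)
    for a in terms[1:]:
        m, m_prev = a * m + m_prev, m
        n, n_prev = a * n + n_prev, n
        yield Fraction(m, n)

xi = sqrt(2)
for frac in convergents([1] + [2] * 12):
    m, n = frac.numerator, frac.denominator
    ok = abs(xi - m / n) < 1 / (sqrt(5) * n * n)
    print(f"{m}/{n}: within Hurwitz bound -> {ok}")
```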
See also
Dirichlet's approximation theorem
Lagrange number
References
Diophantine approximation
Theorems in number theory | Hurwitz's theorem (number theory) | Mathematics | 147 |
14,221,614 | https://en.wikipedia.org/wiki/Topological%20index | In the fields of chemical graph theory, molecular topology, and mathematical chemistry, a topological index, also known as a connectivity index, is a type of a molecular descriptor that is calculated based on the molecular graph of a chemical compound. Topological indices are numerical parameters of a graph which characterize its topology and are usually graph invariant. Topological indices are used for example in the development of quantitative structure-activity relationships (QSARs) in which the biological activity or other properties of molecules are correlated with their chemical structure.
Calculation
Topological descriptors are derived from hydrogen-suppressed molecular graphs, in which the atoms are represented by vertices and the bonds by edges. The connections between the atoms can be described by various types of topological matrices (e.g., distance or adjacency matrices), which can be mathematically manipulated so as to derive a single number, usually known as a graph invariant, graph-theoretical index, or topological index. As a result, topological indices are two-dimensional descriptors that can be easily calculated from molecular graphs, do not depend on the way the graph is depicted or labeled, and require no energy minimization of the chemical structure.
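As a concrete illustration, the Wiener index — the sum of shortest-path distances over all pairs of vertices — can be computed directly from such a graph; the molecule below is an arbitrary example:

```python
from itertools import combinations
from collections import deque

# Wiener index of a hydrogen-suppressed molecular graph: the sum of
# shortest-path (bond-count) distances over all pairs of atoms.
def wiener_index(adjacency):
    """adjacency: dict mapping each atom (vertex) to its bonded neighbours."""
    def bfs_distances(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        return dist

    return sum(bfs_distances(u)[v] for u, v in combinations(adjacency, 2))

# 2-methylbutane (C5H12), hydrogens suppressed: chain C1-C2-C3-C4 with C5 on C2.
molecule = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(wiener_index(molecule))  # -> 18
```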
Types
The simplest topological indices do not recognize double bonds and atom types (C, N, O, etc.), ignore hydrogen atoms ("hydrogen suppressed"), and are defined for connected undirected molecular graphs only. More sophisticated topological indices also take into account the hybridization state of each of the atoms contained in the molecule. The Hosoya index is the first topological index recognized in chemical graph theory, and it is often referred to as "the" topological index. Other examples include the Wiener index, Randić's molecular connectivity index, Balaban's J index, and the TAU descriptors. The extended topochemical atom (ETA) indices have been developed based on refinement of TAU descriptors.
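Of these, Randić's connectivity index has a particularly simple form — a sum of 1/√(deg(u)·deg(v)) over all bonds (u, v); a sketch using the same adjacency-dict representation as the Wiener example above:

```python
from math import sqrt

def randic_index(adjacency):
    seen = set()
    total = 0.0
    for u, neighbours in adjacency.items():
        for v in neighbours:
            if (v, u) not in seen:          # count each bond once
                seen.add((u, v))
                total += 1 / sqrt(len(adjacency[u]) * len(adjacency[v]))
    return total

# 2-methylbutane again: R = 2/sqrt(3) + 1/sqrt(6) + 1/sqrt(2) ≈ 2.270
molecule = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(round(randic_index(molecule), 3))
```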
Global and local indices
The Hosoya index and Wiener index are global (integral) indices that describe an entire molecule; Bonchev and Polansky introduced a local (differential) index for every atom in a molecule. Other examples of local indices are modifications of the Hosoya index.
Discrimination capability and superindices
A topological index may have the same value for a subset of different molecular graphs, i.e. the index is unable to discriminate between the graphs in this subset. Discrimination capability is a very important characteristic of a topological index. To increase discrimination capability, several topological indices may be combined into a superindex.
Computational complexity
Computational complexity is another important characteristic of a topological index. The Wiener index, Randić's molecular connectivity index, and Balaban's J index can be calculated by fast algorithms, in contrast to the Hosoya index and its modifications, for which no non-exponential algorithms are known.
List of topological indices
Wiener index
Hosoya index
Hyper-Wiener index
Estrada index
Randić index
Zagreb indices
Szeged index
Padmakar–Ivan index
Gutman index
Sombor index
Harmonic index
Arithmetic index
Atom bond connectivity index
Merrifield-Simmons index
Application
QSAR
QSARs represent predictive models derived from application of statistical tools correlating biological activity (including desirable therapeutic effect and undesirable side effects) of chemicals (drugs/toxicants/environmental pollutants) with descriptors representative of molecular structure and/or properties. QSARs are being applied in many disciplines for example risk assessment, toxicity prediction, and regulatory decisions in addition to drug discovery and lead optimization.
For example, ETA indices have been applied in the development of predictive QSAR/QSPR/QSTR models.
References
Further reading
External links
Software for calculating various topological indices: GraphTea.
Theoretical chemistry
Mathematical chemistry
Graph invariants
Cheminformatics | Topological index | Chemistry,Mathematics | 763 |
774,811 | https://en.wikipedia.org/wiki/Pedersen%20device | The Pedersen device was an experimental weapon attachment for the M1903 Springfield bolt action rifle that allowed it to fire a .30 caliber (7.62 mm) pistol-type cartridge in semi-automatic fire mode. The attachment was developed to allow an infantryman to convert "their rifle to a form of submachine gun or automatic rifle" in approximately 15 seconds.
Production of the Pedersen device and modified M1903 rifles started in 1918. However, World War I ended before they could be fielded. The contract was cancelled on March 1, 1919, after production of 65,000 devices, 1.6 million magazines, 65 million cartridges, and 101,775 modified Springfield rifles.
The devices, magazines, ammunition and rifles were subsequently placed in storage, and declared surplus in 1931. When the United States Army decided they did not want to pay the cost of storing the devices, nearly all of the stored devices were destroyed except for a few examples kept by the Ordnance Department. Fewer than 100 Pedersen devices escaped ordered destruction to become extremely rare collectors' items.
History
Prior to the United States' entry into World War I, John Pedersen, a longtime employee of Remington Arms, developed the Pedersen device. His idea was to dramatically increase the firepower available to the average infantryman. His final design replaced the bolt of a modified Springfield M1903 rifle with a device consisting of a complete firing mechanism and a small "barrel" for a new .30 caliber pistol-like cartridge.
In effect, the "device" was essentially a complete blowback pistol minus a receiver-grip using the short "barrel" of the device to fit into the longer chamber of the M1903 rifle. The mechanism was fed by a long 40-round magazine sticking perpendicularly out of the rifle at a 45-degree angle to the top right, and could be reloaded by inserting a new magazine. Each magazine had cut-out viewing slots facing aft so the rifleman could observe the number of unfired rounds remaining. The system required an ejection port to be cut into the left side of the M1903 rifle's receiver and the adjacent stock cut away to allow clearance for spent cartridges being thrown from the action. The sear, trigger, and magazine cut-off also required modifications which did not limit the ability of Mark I receivers to function in the normal bolt-action mode.
Pedersen traveled to Washington, D.C. on 8 October 1917 to conduct a secret demonstration for Chief of Ordnance General William Crozier and a selected group of army officers and congressmen. After firing several rounds from what appeared to be an unmodified Springfield, he removed the standard bolt, inserted the device, and fired several magazines at a very high rate of fire. The evaluation team was favorably impressed. To deceive the enemy, the Ordnance Department decided to call it the US Automatic Pistol, Caliber .30, Model of 1918. Plans were put into place to start production of modified Springfields, which became the US Rifle, Cal. .30, Model of 1903, Mark I. The Army placed orders for 133,450 devices and 800,000,000 cartridges for the 1919 spring offensive. General John J. Pershing requested 40 magazines and 5000 rounds of ammunition be shipped with each device and anticipated an average daily ammunition use of 100 rounds per device. The use of the Pedersen device in the 1919 spring offensive was to be in conjunction with the full combat introduction of the M1918 Browning Automatic Rifle (BAR).
The US Patent Office issued four patents to Pedersen for his invention. The United States Army paid Pedersen $50,000 for rights to produce the device and a royalty of 50 cents for each device manufactured. The Army paid for all necessary machinery required to manufacture the device, and Remington received a net profit of two dollars for each device and 3 cents for each magazine.
A Mark II Pedersen Device was also designed for the M1917 "American Enfield" and a similar prototype was made for the Remington-produced Mosin–Nagant; neither of those were ever put into production.
Production
Production of the device started in 1918, along with the modified rifle that December, after the war ended. The contract was cancelled on 1 March 1919 after production of 65,000 devices with 1.6 million magazines, 65 million cartridges and 101,775 modified Springfield rifles. Each device was to be issued with a belt including a stamped, sheet-steel scabbard for safely carrying the device when not in use, a canvas pouch to hold the M1903 rifle bolt when not in use, and canvas pouches holding five magazines. The device with two pouches of loaded magazines added 14 pounds to the infantryman's standard load.
Remington subcontracted magazine production to Mount Vernon Silversmiths, and the carrying scabbards were manufactured by Gorham Manufacturing Company. Canvas pouches for magazines and for the rifle bolt were manufactured at Rock Island Arsenal.
Ammunition was packaged in 40-round boxes sufficient to fill one magazine. Five boxes were packed in a carton corresponding to the five-magazine pouches, and three cartons were carried in a light canvas bandolier holding 600 cartridges. Five bandoliers were packed in a wooden crate. Ammunition produced by Remington is headstamped "RA" (or "RAH" for the Hoboken, New Jersey plant) with the years (19-) "18", "19", and "20".
Post-war
After the war, the semi-automatic concept started to gain currency in the U.S. Army. By the late 1920s, the Army was experimenting with several new semi-automatic rifle designs, including the Pedersen rifle firing a new .276 (7 mm) rifle cartridge. However, the Pedersen rifle lost to a new semi-automatic rifle designed by John C. Garand. The Garand was originally developed for .30-06 cartridge and converted to the new .276 cartridge. After the .276 Garand rifle was selected over the Pedersen rifle, General Douglas MacArthur came out against changing rifle cartridges, since the Army had vast stockpiles of .30–06 ammunition left over from World War I, the .30-06 would have to be retained for machine gun use, and one cartridge simplified wartime logistics. Garand reverted his design back to the standard .30-06 Springfield cartridge in 1932; the result became the M1 Garand.
The Pedersen device was declared surplus in 1931, five years before the Garand had even started serial production. Mark I rifles were altered to M1903 standard in 1937 (except for, curiously, an ejection slot that remained in the receiver side wall) and were used alongside standard M1903 and M1903A1 Springfields. Once the Army decided it did not want to pay the cost of storage, nearly all of the stored devices were destroyed except for a few examples kept by the Ordnance Department. They were burned in a large bonfire, though some were taken during the process. Following their destruction, noted writer Julian Hatcher wrote an authoritative article for the May 1932 issue of American Rifleman magazine describing the device in detail.
See also
7.65mm Longue
Remington Model 14
Remington Model 51
References
External links
NRA
Civilian Marksmanship Program, complete description of Pedersen device and history
Remington Society
Auction Press Release
Firearm components
Trial and research firearms
.32 Longue firearms | Pedersen device | Technology | 1,496 |
44,965,599 | https://en.wikipedia.org/wiki/Eoxin%20D4 | Eoxin D4 (EXD4), also known as 14,15-leukotriene D4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e., 15(S)-HpETE). This product is then converted serially to EXA4, EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway that forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows:

arachidonic acid → 15(S)-HpETE → EXA4 → EXC4 → EXD4 → EXE4
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see eoxins).
References
Eicosanoids | Eoxin D4 | Chemistry,Biology | 284 |
985,793 | https://en.wikipedia.org/wiki/Vacuum%20extraction | Vacuum extraction (VE), also known as ventouse, is a method to assist delivery of a baby using a vacuum device. It is used in the second stage of labor if it has not progressed adequately. It may be an alternative to a forceps delivery and caesarean section. It cannot be used when the baby is in the breech position or for premature births. The use of VE is generally safe, but it can occasionally have negative effects on either the mother or the child. The term ventouse comes from the French word for "suction cup".
Medical uses
There are several indications to use a vacuum extraction to aid delivery:
Maternal exhaustion
Prolonged second stage of labor
Foetal distress in the second stage of labor, generally indicated by changes in the foetal heart-rate (usually measured on a CTG)
Maternal illness where prolonged "bearing down" or pushing efforts would be risky (e.g. cardiac conditions, high blood pressure, aneurysm, glaucoma). If these conditions are known about before the birth, or are severe, then an elective caesarean section may be performed.
Technique
The woman is placed in the lithotomy position and assists throughout the process by pushing. A suction cup is placed onto the head of the baby and the suction draws the skin from the scalp into the cup. Correct placement of the cup directly over the flexion point, about 3 cm anterior from the occipital (posterior) fontanelle, is critical to the success of a vacuum extraction. Ventouse devices have handles to allow for traction. When the baby's head is delivered, the device is detached, allowing the birthing attendant and the mother to complete the delivery of the baby.
For proper use of the ventouse, the maternal cervix has to be fully dilated, the head engaged in the birth canal, and the head position known. Preferably the operator of the vacuum extractor needs to be experienced in order to safely perform the procedure. The baby should not be preterm, previously exposed to scalp sampling or failed forceps delivery. If the ventouse attempt fails, it may be necessary to deliver the infant by forceps or caesarean section.
History
In 1849 the Edinburgh professor of obstetrics James Young Simpson, subsequently known for pioneering the use of chloroform in childbirth, designed the Air Tractor, which consisted of a metal syringe attached to a soft rubber cup. This was the earliest known vacuum extractor to assist childbirth, but it did not become popular. The Swedish professor Tage Malmstrom developed the ventouse, or Malmstrom extractor, in the 1950s. Originally made with a metal cup, the design has since been improved with new materials such as plastics and siliconised rubber, so that it is now used more than forceps.
Vacuum delivery as a percentage of vaginal births varies depending on location. In the USA, vacuum deliveries comprise about 10% to 15% of vaginal births, while in Italy 4.8% of vaginal births were delivered via vacuum in 2013.
Comparisons to other forms of assisted delivery
Positive aspects
An episiotomy may not be required.
The mother still takes an active role in the birth.
No special anesthesia is required.
There is less potential for maternal trauma compared to forceps and caesarean section.
Negative aspects
The baby will be left with a temporary lump on its head, known as a chignon.
There is a possibility of cephalohematoma formation, or subgaleal hemorrhage which can be life-threatening.
There is a higher risk of failure to deliver the baby than with forceps, and an increased likelihood of perineal trauma.
See also
Odón device
References
Childbirth
Medical equipment
Obstetrical procedures | Vacuum extraction | Biology | 759 |
67,469,176 | https://en.wikipedia.org/wiki/Reverse%20complement%20polymerase%20chain%20reaction | Reverse complement polymerase chain reaction (RC-PCR) is a modification of the polymerase chain reaction (PCR). It is primarily used to generate amplicon libraries for DNA sequencing by next generation sequencing (NGS). The technique permits both the amplification and the ability to append sequences or functional domains of choice independently to either end of the generated amplicons in a single closed tube reaction. RC-PCR was invented in 2013 by Daniel Ward and Christopher Mattocks at Salisbury NHS Foundation Trust, UK.
Principles
In RC-PCR, no target specific primers are present in the reaction mixture. Instead, target specific primers are formed as the reaction proceeds. A typical reaction employing the approach requires four oligonucleotides. The oligonucleotides interact with each other in pairs: one oligonucleotide probe and one universal primer (containing functional domains of choice), which hybridize with each other at their 3’ ends. Once hybridized, the universal primer can be extended, using the oligonucleotide probe as the template, to yield fully formed, target specific primers, which are then available to amplify the template in subsequent rounds of thermal cycling as per a standard PCR reaction.
The oligonucleotide probe may also be blocked at the 3’ end preventing equivalent extension of the probe, but this is not essential. The probe is not consumed; it is available to act as a template for the universal primer to be ‘converted’ into target specific primer throughout successive PCR cycles. This generation of target specific primer occurs in parallel with standard PCR amplification under standard PCR conditions.
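The technique's name reflects the complementarity requirement just described; a toy sketch of the underlying reverse-complement relationship follows (the sequences are invented examples, not from any real assay):

```python
# For the primer/probe pair to anneal, the universal primer's 3' end must
# be the reverse complement of the probe's 3' region.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

probe_3_end = "GATTACA"                        # hypothetical probe 3' region
primer_3_end = reverse_complement(probe_3_end)
print(primer_3_end)  # TGTAATC -- anneals to the probe so extension can begin
```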
Advantages
RC-PCR provides significant advantages over other amplicon library preparation methods. Most significantly, it is a single closed-tube reaction; this eliminates the cross-contamination associated with two-step PCR approaches, while also using less reagent and requiring less labour to perform.
The technique also provides the significant advantage of flexibility: any desired sequence or functional domain of choice can be appended to either end of any amplicon. This is currently most advantageous in modern next generation sequencing (NGS) laboratories, where a single target specific probe pair can be used with a whole library of universal primers. This benefit is used in NGS applications to apply sample-specific indexes independently to each end of the amplicon construct. A laboratory employing this approach would only require a single set of index primers, which can be used with all target specific probes compatible with that index set. This significantly reduces the number and length of oligonucleotides required by the laboratory compared to using full-length pre-synthesised indexed target specific primers.
The generation of the target specific primer in the reaction as it progresses also leads to more balanced reaction components. Concentrations of target specific primer are more closely aligned with target molecule concentration, thereby reducing the potential for both off-target priming and primer dimerisation.
Variations
Multiplex RC-PCR – where two or more universal primer probe sets are present in the reaction mixture to amplify two or more targets simultaneously.
RT-RC-PCR – This modification is used when the template material supplied in the reaction is RNA rather than DNA. In this modification the reaction mixture also contains reverse transcriptase enzymes and reverse transcription primers as well as the universal primers and Reverse complement probes of the method. This approach permits reverse transcription of the provided RNA template, the formation of tailed target specific primers and the amplification of the desired targets in a single closed tube reaction.
Single ended RC-PCR – This variation of the method is used when only one complementary universal primer probe pair is provided in the reaction to generate one target specific primer. The other target specific primer is provided as a traditional primer as per standard PCR.
History
Following the invention of RC-PCR in 2013, the technique was clinically validated and employed diagnostically for a range of inherited diseases, such as hemochromatosis and thrombophilia, as well as somatically acquired disorders, including myeloproliferative neoplasms and acute myeloid leukemia, in the Wessex Regional Genetics Laboratory (WRGL), Salisbury, UK. More recently, work has been undertaken to utilise the technology in the fight against the SARS-CoV-2 pandemic.
The patent application was filed in the UK in 2015 and awarded in 2020. Patent applications have been filed in other jurisdictions worldwide and are currently pending.
In May 2019 the intellectual property was licensed to Nimagen B.V. to develop, manufacture and market kits exploiting the technology. Commercially available kits employing the technology include those for human identification and for the whole genome sequencing of the SARS-CoV-2 virus for variant identification, tracking and treatment response. In August 2022 Nimagen officially launched a range of products employing the RC-PCR technology for human forensics applications under the trademark IDseek®. The short tandem repeat version of the kit is validated by the Netherlands Forensic Institute as an improved method for routine massively parallel sequencing of short tandem repeats.
The RC-PCR approach is becoming more widely used in human health, and several CE-IVD kits are available for human clinical diagnostics, including BRCA, TP53, PALB2 and CFTR analysis. The technique has also proven a useful and powerful tool in identifying the causative infectious pathogen in patients suspected of having a bacterial infection; in this setting it has been shown to provide a significant increase in the number of clinical samples in which a potentially clinically relevant pathogen is identified, compared to the commonly used 16S Sanger method. It has also been shown to provide similar advantages over traditional methods in the deconvolution of microbial communities in environmental samples.
References
External links
RC-PCR animation
WIPO patent filing information page
Polymerase chain reaction
SARS-CoV-2
DNA sequencing methods
Molecular biology techniques
DNA profiling techniques
Laboratory techniques
Amplifiers
British inventions | Reverse complement polymerase chain reaction | Chemistry,Technology,Biology | 1,241 |
70,936,440 | https://en.wikipedia.org/wiki/Henry%20Johnston%20Scott%20Matthew | Henry Johnston Scott Matthew FRCPE (22 March 1914 – 7 April 1997) was a Scottish physician and toxicologist in charge of the Regional Poisoning Treatment Centre from 1964 and Director of the Scottish Poisons Information Bureau from 1965. Matthew changed his career path, concentrating on Toxicology in 1957 and was known as the Father of Clinical Toxicology.
Education and early career
Matthew was born in Edinburgh in 1914. After schooling at Edinburgh Academy, he studied medicine at the University of Edinburgh, graduating in the top five of the final examinations of 1937. His career in surgery began at the Royal Infirmary of Edinburgh and Great Ormond Street Hospital, London; however, it was cut short by the outbreak of World War II. During the war, he served with the Royal Army Medical Corps in the Middle East and Persia. After the war and a brief break, he returned to Scotland and went into general practice, joining the faculty of medicine as a consultant physician specializing in cardiology at Edinburgh Royal Infirmary in 1945. In 1951 he was elected a member of the Harveian Society of Edinburgh.
Career change and research
Matthew changed his specialization after the management of the Royal Infirmary, responding to a report issued by the Ministry of Health in England and the Scottish Home and Health Department, designated the ward for incidental delirium (Ward 3) a Regional Poisoning Treatment Centre (RPTC). In 1964 he officially began his long-term research and clinical practice in toxicology. A turning point in his motivation was his observation that patients were being prescribed barbiturates as a remedy and overdosing on them, a practice with which he strongly disagreed. Matthew reassessed and improved the role of gastric lavage and aspiration in barbiturate overdose, an approach that had been abandoned in Denmark in 1946; his research became a widely used reference among clinical staff.
Later life
Matthew retired in 1975 and suffered from prostate cancer in his later life.
Works
Acute Barbiturate Poisoning, edited by Henry Matthew (Excerpta Medica, 1971)
Treatment of common acute poisonings, edited by Henry Matthew and Alexander A. H. Lawson (Longman, 1975)
References
1914 births
1997 deaths
Alumni of the University of Edinburgh
Academics from Edinburgh
British toxicologists
Toxicology
Members of the Harveian Society of Edinburgh
People educated at Edinburgh Academy | Henry Johnston Scott Matthew | Environmental_science | 492 |
928,060 | https://en.wikipedia.org/wiki/Jet%20bundle | In differential topology, the jet bundle is a certain construction that makes a new smooth fiber bundle out of a given smooth fiber bundle. It makes it possible to write differential equations on sections of a fiber bundle in an invariant form. Jets may also be seen as the coordinate free versions of Taylor expansions.
Historically, jet bundles are attributed to Charles Ehresmann, and were an advance on the method (prolongation) of Élie Cartan, of dealing geometrically with higher derivatives, by imposing differential form conditions on newly introduced formal variables. Jet bundles are sometimes called sprays, although sprays usually refer more specifically to the associated vector field induced on the corresponding bundle (e.g., the geodesic spray on Finsler manifolds.)
Since the early 1980s, jet bundles have appeared as a concise way to describe phenomena associated with the derivatives of maps, particularly those associated with the calculus of variations. Consequently, the jet bundle is now recognized as the correct domain for a geometrical covariant field theory and much work is done in general relativistic formulations of fields using this approach.
Jets
Suppose M is an m-dimensional manifold and that (E, π, M) is a fiber bundle. For p ∈ M, let Γ(p) denote the set of all local sections whose domain contains p. Let I = (I(1), I(2), ..., I(m)) be a multi-index (an m-tuple of non-negative integers, not necessarily in ascending order), then define:

\[ |I| := \sum_{i=1}^{m} I(i), \qquad \frac{\partial^{|I|}}{\partial x^I} := \prod_{i=1}^{m} \left( \frac{\partial}{\partial x^i} \right)^{I(i)}. \]
Define the local sections σ, η ∈ Γ(p) to have the same r-jet at p if

\[ \left. \frac{\partial^{|I|} \sigma^\alpha}{\partial x^I} \right|_p = \left. \frac{\partial^{|I|} \eta^\alpha}{\partial x^I} \right|_p, \qquad 0 \leq |I| \leq r. \]
The relation that two maps have the same r-jet is an equivalence relation. An r-jet is an equivalence class under this relation, and the r-jet with representative σ is denoted jrpσ. The integer r is also called the order of the jet, p is its source and σ(p) is its target.
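A concrete illustration (ours, not part of the original text): for the trivial bundle (R × R, pr1, R) with base coordinate x, two sections σ, η have the same 2-jet at 0 exactly when their Taylor data agree,

\[ j^2_0\sigma = j^2_0\eta \iff \sigma(0) = \eta(0),\ \sigma'(0) = \eta'(0),\ \sigma''(0) = \eta''(0), \]

so a 2-jet at 0 carries precisely the information in the degree-2 Taylor polynomial of the section, matching the description of jets as coordinate-free versions of Taylor expansions.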
Jet manifolds
The r-th jet manifold of π is the set

\[ J^r(\pi) = \left\{ j^r_p\sigma : p \in M,\ \sigma \in \Gamma(p) \right\}. \]
We may define projections πr and πr,0, called the source and target projections respectively, by

\[ \pi_r(j^r_p\sigma) = p, \qquad \pi_{r,0}(j^r_p\sigma) = \sigma(p). \]
If 1 ≤ k ≤ r, then the k-jet projection is the function πr,k defined by

\[ \pi_{r,k}(j^r_p\sigma) = j^k_p\sigma. \]
From this definition, it is clear that πr = π ∘ πr,0 and that if 0 ≤ m ≤ k, then πr,m = πk,m ∘ πr,k. It is conventional to regard πr,r as the identity map on J r(π) and to identify J 0(π) with E.
The functions πr,k, πr,0 and πr are smooth surjective submersions.
A coordinate system on E will generate a coordinate system on J r(π). Let (U, u) be an adapted coordinate chart on E, where u = (xi, uα). The induced coordinate chart (Ur, ur) on J r(π) is defined by

\[ U^r = \left\{ j^r_p\sigma : \sigma(p) \in U \right\}, \qquad u^r = \left( x^i, u^\alpha, u^\alpha_I \right), \]

where

\[ x^i\left( j^r_p\sigma \right) = x^i(p), \qquad u^\alpha\left( j^r_p\sigma \right) = u^\alpha(\sigma(p)), \]

and the functions known as the derivative coordinates:

\[ u^\alpha_I\left( j^r_p\sigma \right) = \left. \frac{\partial^{|I|} \sigma^\alpha}{\partial x^I} \right|_p, \qquad 1 \leq |I| \leq r. \]
Given an atlas of adapted charts (U, u) on E, the corresponding collection of charts (U r, u r) is a finite-dimensional C∞ atlas on J r(π).
Jet bundles
Since the atlas on each J r(π) defines a manifold, the triples (J r(π), πr,k, J k(π)), (J r(π), πr,0, E) and (J r(π), πr, M) all define fibered manifolds. In particular, if (E, π, M) is a fiber bundle, the triple (J r(π), πr, M) defines the r-th jet bundle of π.
If W ⊂ M is an open submanifold, then

\[ J^r\left( \pi|_W \right) \cong \pi_r^{-1}(W). \]
If p ∈ M, then the fiber πr−1(p) is denoted Jrp(π).
Let σ be a local section of π with domain W ⊂ M. The r-th jet prolongation of σ is the map jrσ: W → Jr(π) defined by

\[ (j^r\sigma)(p) = j^r_p\sigma. \]

Note that πr ∘ jrσ = idW, so jrσ really is a section. In local coordinates, jrσ is given by

\[ \left( \sigma^\alpha(x),\ \frac{\partial^{|I|} \sigma^\alpha}{\partial x^I}(x) \right), \qquad 1 \leq |I| \leq r. \]

We identify j0σ with σ.
Algebro-geometric perspective
An independently motivated construction of the sheaf of sections is given. Consider a diagonal map Δ: M → M × M, where the smooth manifold M is a locally ringed space with C∞(U) for each open U. Let 𝓘 be the ideal sheaf of Δ(M); equivalently, let 𝓘 be the sheaf of smooth germs which vanish on Δ(M). The pullback of the quotient sheaf O/𝓘k+1 from M × M to M by Δ is the sheaf of k-jets.
The direct limit of the sequence of injections given by the canonical inclusions of sheaves gives rise to the infinite jet sheaf. Observe that, by the direct limit construction, it is a filtered ring.
Example
If π is the trivial bundle (M × R, pr1, M), then there is a canonical diffeomorphism between the first jet bundle J1(π) and T*M × R. To construct this diffeomorphism, for each σ in ΓM(π) write σ̄ = pr2 ∘ σ ∈ C∞(M).

Then, whenever p ∈ M,

\[ j^1_p\sigma = \left( p,\ \bar{\sigma}(p),\ (d\bar{\sigma})_p \right). \]

Consequently, the mapping

\[ J^1(\pi) \to T^*M \times \mathbf{R}, \qquad j^1_p\sigma \mapsto \left( (d\bar{\sigma})_p,\ \bar{\sigma}(p) \right) \]
is well-defined and is clearly injective. Writing it out in coordinates shows that it is a diffeomorphism, because if (xi, u) are coordinates on M × R, where u = idR is the identity coordinate, then the derivative coordinates ui on J1(π) correspond to the coordinates ∂i on T*M.
Likewise, if π is the trivial bundle (R × M, pr1, R), then there exists a canonical diffeomorphism between J1(π) and R × TM.
Contact structure
The space Jr(π) carries a natural distribution, that is, a sub-bundle of the tangent bundle TJr(π), called the Cartan distribution. The Cartan distribution is spanned by all tangent planes to graphs of holonomic sections; that is, sections of the form jrφ for φ a section of π.
The annihilator of the Cartan distribution is a space of differential one-forms called contact forms, on Jr(π). The space of differential one-forms on Jr(π) is denoted by Λ1Jr(π) and the space of contact forms is denoted by ΛC1Jr(π). A one-form θ is a contact form provided its pullback along every prolongation is zero. In other words, θ is a contact form if and only if

\[ (j^r\sigma)^*\theta = 0 \]
for all local sections σ of π over M.
The Cartan distribution is the main geometrical structure on jet spaces and plays an important role in the geometric theory of partial differential equations. The Cartan distributions are completely non-integrable. In particular, they are not involutive. The dimension of the Cartan distribution grows with the order of the jet space. However, on the space of infinite jets J∞ the Cartan distribution becomes involutive and finite-dimensional: its dimension coincides with the dimension of the base manifold M.
Example
Consider the case (E, π, M), where E ≃ R2 and M ≃ R. Then, (J1(π), π, M) defines the first jet bundle, and may be coordinated by (x, u, u1), where

\[ x\left( j^1_p\sigma \right) = x(p), \qquad u\left( j^1_p\sigma \right) = u(\sigma(p)), \qquad u_1\left( j^1_p\sigma \right) = \left. \frac{\partial\sigma}{\partial x} \right|_p \]

for all p ∈ M and σ in Γp(π). A general 1-form on J1(π) takes the form

\[ \theta = a(x, u, u_1)\, dx + b(x, u, u_1)\, du + c(x, u, u_1)\, du_1. \]

A section σ in Γp(π) has first prolongation

\[ j^1\sigma : p \mapsto \left( \sigma(p),\ \left. \frac{\partial\sigma}{\partial x} \right|_p \right). \]

Hence, (j1σ)*θ can be calculated as

\[ (j^1\sigma)^*\theta = \left( a + b\,\sigma'(x) + c\,\sigma''(x) \right) dx. \]

This will vanish for all sections σ if and only if c = 0 and a = −bσ′(x). Hence, θ = b(x, u, u1)θ0 must necessarily be a multiple of the basic contact form θ0 = du − u1dx. Proceeding to the second jet space J2(π) with additional coordinate u2, such that

\[ u_2\left( j^2_p\sigma \right) = \left. \frac{\partial^2\sigma}{\partial x^2} \right|_p, \]

a general 1-form has the construction

\[ \theta = a\, dx + b\, du + c\, du_1 + e\, du_2. \]

This is a contact form if and only if

\[ (j^2\sigma)^*\theta = \left( a + b\,\sigma'(x) + c\,\sigma''(x) + e\,\sigma'''(x) \right) dx = 0 \]

for all sections σ, which implies that e = 0 and a = −bσ′(x) − cσ′′(x). Therefore, θ is a contact form if and only if

\[ \theta = b\,\theta_0 + c\,\theta_1, \]
where θ1 = du1 − u2dx is the next basic contact form (Note that here we are identifying the form θ0 with its pull-back to J2(π)).
In general, providing x, u ∈ R, a contact form on Jr+1(π) can be written as a linear combination of the basic contact forms

\[ \theta_k = du_k - u_{k+1}\, dx, \qquad 0 \leq k \leq r, \]

where u0 = u.
Similar arguments lead to a complete characterization of all contact forms.
In local coordinates, every contact one-form on Jr+1(π) can be written as a linear combination

\[ \theta = \sum_{|I|=0}^{r} P^I_\alpha\, \theta^\alpha_I \]

with smooth coefficients PIα of the basic contact forms

\[ \theta^\alpha_I = du^\alpha_I - u^\alpha_{I,i}\, dx^i. \]

|I| is known as the order of the contact form θαI. Note that contact forms on Jr+1(π) have orders at most r. Contact forms provide a characterization of those local sections of πr+1 which are prolongations of sections of π.
Let ψ ∈ ΓW(πr+1); then ψ = jr+1σ where σ ∈ ΓW(π) if and only if

\[ \psi^*\theta = 0 \quad \text{for every contact form } \theta \text{ on } J^{r+1}(\pi). \]
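As a quick check of this characterization (our example, using the trivial bundle and the basic contact form θ0 = du − u1dx from the example above): a section ψ(x) = (x, f(x), g(x)) of π1 satisfies

\[ \psi^*\theta_0 = \left( f'(x) - g(x) \right) dx, \]

which vanishes precisely when g = f′, that is, exactly when ψ is the prolongation j1σ of the section σ(x) = (x, f(x)).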
Vector fields
A general vector field on the total space E, coordinated by (x, u) := (xi, uα), is

\[ V := \rho^i(x, u)\, \frac{\partial}{\partial x^i} + \phi^\alpha(x, u)\, \frac{\partial}{\partial u^\alpha}. \]

A vector field is called horizontal if all its vertical coefficients vanish, that is, if φα = 0.
A vector field is called vertical if all its horizontal coefficients vanish, that is, if ρi = 0.
For fixed (x, u), we identify

\[ V_{(x,u)} = \rho^i(x, u)\, \frac{\partial}{\partial x^i} + \phi^\alpha(x, u)\, \frac{\partial}{\partial u^\alpha}, \]

having coordinates (x, u, ρi, φα), with an element in the fiber TxuE of TE over (x, u) in E, called a tangent vector in TE. A section

\[ \psi : E \to TE, \qquad \psi(x, u) = \left( x, u, \rho^i(x, u), \phi^\alpha(x, u) \right), \]

is called a vector field on E, with ψ in Γ(TE).
The jet bundle Jr(π) is coordinated by (x, u, w) := (xi, uα, wαI). For fixed (x, u, w), identify

\[ V_{(x,u,w)} = \rho^i\, \frac{\partial}{\partial x^i} + \phi^\alpha\, \frac{\partial}{\partial u^\alpha} + \phi^\alpha_I\, \frac{\partial}{\partial w^\alpha_I}, \]

having coordinates (x, u, w, ρi, φα, φαI), with an element in the fiber of TJr(π) over (x, u, w) ∈ Jr(π), called a tangent vector in TJr(π). Here,

\[ \rho^i,\ \phi^\alpha,\ \phi^\alpha_I \]

are real-valued functions on Jr(π). A section

\[ \Psi : J^r(\pi) \to TJ^r(\pi), \qquad \Psi(x, u, w) = \left( x, u, w, \rho^i, \phi^\alpha, \phi^\alpha_I \right), \]

is a vector field on Jr(π), and we say Ψ ∈ Γ(TJr(π)).
Partial differential equations
Let (E, π, M) be a fiber bundle. An r-th order partial differential equation on π is a closed embedded submanifold S of the jet manifold Jr(π). A solution is a local section σ ∈ ΓW(π) satisfying jrpσ ∈ S, for all p in M.
Consider an example of a first order partial differential equation.
Example
Let π be the trivial bundle (R2 × R, pr1, R2) with global coordinates (x1, x2, u1). Then the map F : J1(π) → R defined by
gives rise to the differential equation
which can be written
The particular
has first prolongation given by
and is a solution of this differential equation, because
and so for every p ∈ R2.
Jet prolongation
A local diffeomorphism ψ : Jr(π) → Jr(π) defines a contact transformation of order r if it preserves the contact ideal, meaning that if θ is any contact form on Jr(π), then ψ*θ is also a contact form.
The flow generated by a vector field Vr on the jet space Jr(π) forms a one-parameter group of contact transformations if and only if the Lie derivative of any contact form θ preserves the contact ideal.
Let us begin with the first order case. Consider a general vector field V1 on J1(π), given by
We now apply the Lie derivative along V1 to the basic contact forms and expand the exterior derivative of the functions in terms of their coordinates to obtain:
Therefore, V1 determines a contact transformation if and only if the coefficients of dxi and duαi in the formula vanish. The latter requirements imply the contact conditions
The former requirements provide explicit formulae for the coefficients of the first derivative terms in V1:
where
denotes the zeroth order truncation of the total derivative Di.
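For reference, the total derivative in jet coordinates is standardly given by (our statement of a standard formula, in the coordinate notation used above):

\[ D_i = \frac{\partial}{\partial x^i} + \sum_{|I| \geq 0} u^\alpha_{I,i}\, \frac{\partial}{\partial u^\alpha_I}, \]

and its zeroth order truncation retains only the |I| = 0 terms:

\[ \frac{\partial}{\partial x^i} + u^\alpha_i\, \frac{\partial}{\partial u^\alpha}. \]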
Thus, the contact conditions uniquely prescribe the prolongation of any point or contact vector field. That is, if Vr satisfies these equations, Vr is called the r-th prolongation of V to a vector field on Jr(π).
These results are best understood when applied to a particular example. Hence, let us examine the following.
Example
Consider the case (E, π, M), where E ≅ R2 and M ≃ R. Then, (J1(π), π, E) defines the first jet bundle, and may be coordinated by (x, u, u1), where
for all p ∈ M and σ in Γp(π). A contact form on J1(π) has the form
Consider a vector V on E, having the form
Then, the first prolongation of this vector field to J1(π) is
If we now take the Lie derivative of the contact form with respect to this prolonged vector field, we obtain
Hence, for preservation of the contact ideal, we require
And so the first prolongation of V to a vector field on J1(π) is
Let us also calculate the second prolongation of V to a vector field on J2(π). We have as coordinates on J2(π). Hence, the prolonged vector has the form
The contact forms are
To preserve the contact ideal, we require
Now, θ has no u2 dependency. Hence, from this equation we will pick up the formula for ρ, which will necessarily be the same result as we found for V1. Therefore, the problem is analogous to prolonging the vector field V1 to J2(π). That is to say, we may generate the r-th prolongation of a vector field by recursively applying the Lie derivative of the contact forms with respect to the prolonged vector fields, r times. So, we have
and so
Therefore, the Lie derivative of the second contact form with respect to V2 is
Hence, for V2 to preserve the contact ideal, we require
And so the second prolongation of V to a vector field on J2(π) is
Note that the first prolongation of V can be recovered by omitting the second derivative terms in V2, or by projecting back to J1(π).
Infinite jet spaces
The inverse limit of the sequence of projections πk+1,k: Jk+1(π) → Jk(π) gives rise to the infinite jet space J∞(π). A point j∞p(σ) is the equivalence class of sections of π that have the same k-jet in p as σ for all values of k. The natural projection π∞ maps j∞p(σ) into p.
Just by thinking in terms of coordinates, J∞(π) appears to be an infinite-dimensional geometric object. In fact, the simplest way of introducing a differentiable structure on J∞(π), not relying on differentiable charts, is given by the differential calculus over commutative algebras. Dual to the sequence of projections of manifolds is the sequence of injections of commutative algebras. Let's denote C∞(Jk(π)) simply by Fk. Take now the direct limit F of the Fk's. It will be a commutative algebra, which can be assumed to be the smooth functions algebra over the geometric object J∞(π). Observe that F, being born as a direct limit, carries an additional structure: it is a filtered commutative algebra.
Roughly speaking, a concrete element φ ∈ F will always belong to some Fk, so it is a smooth function on the finite-dimensional manifold Jk(π) in the usual sense.
Infinitely prolonged PDEs
Given a k-th order system of PDEs E ⊆ Jk(π), the collection I(E) of smooth functions on J∞(π) vanishing on E is an ideal in the algebra Fk, and hence in the direct limit F too.
Enhance I(E) by adding all the possible compositions of total derivatives applied to all its elements. This way we get a new ideal I of F which is now closed under the operation of taking total derivative. The submanifold E(∞) of J∞(π) cut out by I is called the infinite prolongation of E.
Geometrically, E(∞) is the manifold of formal solutions of E. A point of E(∞) can be easily seen to be represented by a section σ whose k-jet's graph is tangent to E at the point with arbitrarily high order of tangency.
Analytically, if E is given by φ = 0, a formal solution can be understood as the set of Taylor coefficients of a section σ in a point p that make vanish the Taylor series of φ ∘ jk(σ) at the point p.
Most importantly, the closure properties of I imply that E(∞) is tangent to the infinite-order contact structure on J∞(π), so that by restricting to E(∞) one gets the diffiety , and can study the associated Vinogradov (C-spectral) sequence.
Remark
This article has defined jets of local sections of a bundle, but it is possible to define jets of functions f: M → N, where M and N are manifolds; the jet of f then just corresponds to the jet of the section

grf: M → M × N, grf(p) = (p, f(p))

of the trivial bundle (M × N, π1, M) (grf is known as the graph of the function f). However, this restriction does not simplify the theory, as the global triviality of π does not imply the global triviality of π1.
See also
Jet group
Jet (mathematics)
Lagrangian system
Variational bicomplex
References
Further reading
Ehresmann, C., "Introduction à la théorie des structures infinitésimales et des pseudo-groupes de Lie", Geometrie Differentielle, Colloq. Inter. du Centre Nat. de la Recherche Scientifique, Strasbourg, 1953, 97–127.
Kolář, I., Michor, P., Slovák, J., Natural Operations in Differential Geometry, Springer-Verlag: Berlin Heidelberg, 1993.
Saunders, D. J., The Geometry of Jet Bundles, Cambridge University Press, 1989.
Krasil'shchik, I. S., Vinogradov, A. M., et al., Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, Amer. Math. Soc., Providence, RI, 1999.
Olver, P. J., Equivalence, Invariants and Symmetry, Cambridge University Press, 1995.
Differential topology
Differential equations
Fiber bundles | Jet bundle | Mathematics | 3,740 |
40,887,862 | https://en.wikipedia.org/wiki/Chemical%20phosphorus%20removal | Chemical phosphorus removal is a wastewater treatment method, where phosphorus is removed using salts of aluminum (e.g. alum or polyaluminum chloride), iron (e.g. ferric chloride), or calcium (e.g. lime). Phosphate forms precipitates with the metal ions and is removed together with the sludge in the separation unit (sedimentation tank, flotation tank, etc.).
Aluminum sulfate treatment to reduce phosphorus content of lakes
One method of eutrophication remediation is the application of aluminum sulfate, a salt commonly used in the coagulation process of drinking water treatment. Aluminum sulfate, commonly called "alum", has been found to be an effective lake management tool for reducing the phosphorus load.
Alum was first applied to a lake in Sweden in 1968. Its first application to an American lake followed in 1970. Today, alum is utilized with improved effectiveness and understanding. In one large-scale study, 114 lakes were monitored for the effectiveness of alum at phosphorus reduction. Across all lakes, alum effectively reduced phosphorus for an average of 11 years. While the longevity varied (21 years in deep lakes and 5.7 years in shallow lakes), the results demonstrate the effectiveness of alum at controlling phosphorus within lakes.
Mechanism
Alum treatment begins with the addition of aluminum sulfate salt to a water body. Once added, the salt dissolves and dissociates, introducing Al(III) ions to the water. The aluminum ions participate in a series of hydrolysis reactions, forming different aluminum species across pH ranges. As more aluminum sulfate is added, water pH decreases. At higher pH, the soluble species Al(OH)4− is present. In neutral pH ranges (6–8), the insoluble aluminum hydroxide (Al(OH)3) occurs. As pH decreases further, the Al(III) ion remains present.
Maintaining optimal pH is important for the removal of phosphorus from water. Phosphorus is most effectively removed at the neutral pH range, when the insoluble aluminum hydroxide is present. This hydroxide functions as a Lewis acid, creating a flocculation environment similar to conventional wastewater treatment. The insoluble Al(OH)3 floc adsorbs phosphorus, as well as other species, and removes them from the water column. As floc adsorption continues, the floc becomes larger, eventually settling to the bottom of the water column in the sediment. The resulting aluminum hydroxide layer covering the lake bottom additionally blocks the diffusion of phosphorus from sediment into the water column, further regulating internally loaded phosphorus.
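As a rough illustration of the speciation just described (a sketch only: the pH boundaries are simplified to the ranges quoted above, whereas real speciation is continuous and concentration-dependent), a minimal Python sketch:

def dominant_aluminum_species(ph: float) -> str:
    """Very simplified dominant Al species by pH, per the ranges above."""
    if ph < 6.0:
        return "Al3+"      # acidic water: the Al(III) ion remains present
    elif ph <= 8.0:
        return "Al(OH)3"   # neutral range: insoluble hydroxide floc forms
    else:
        return "Al(OH)4-"  # alkaline water: soluble aluminate species

for ph in (4.5, 7.0, 9.5):
    print(ph, dominant_aluminum_species(ph))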
Implementation
For most alum treatments, aluminum sulfate salt is applied to substrate at the lake's bottom, within the hypolimnion. The alum then reduces phosphorus levels by inactivating the phosphorus released from these lake sediments, thereby controlling phosphorus in the entire water column. This phosphorus supplied from within the lake sediments is known as "internally loaded" phosphorus, as opposed to "externally loaded" phosphorus supplied by sources outside the lake, such as runoff.
Although alum is typically applied to the hypolimnion, reducing phosphorus universally within the lake, it may also be applied to the epilimnion or locally to point sources. This style of alum treatment is similar to the use of alum in conventional water treatment, and is more effective at reducing externally loaded phosphorus than universal application of alum to the hypolimnion. When alum is applied to the epilimnion, boats carrying aluminum sulfate and powered by an outboard motor are deployed onto the lake. After the necessary dosage and location of the application are determined, the aluminum sulfate is added to the surface of the lake near the wake of the outboard motor. This provides sufficient mixing of the aluminum sulfate within the epilimnion.
The necessary dosage of alum depends on a variety of parameters. Changes in pH, dissolved oxygen levels, the metal content of lake sediment, and lake size are all important considerations. Alum dosage is calculated by scientists and engineers to maximize the treatment's effectiveness, as sketched below.
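The details of dose calculation are site-specific, but the stoichiometric core can be sketched as follows (a sketch only: the Al:P molar ratio and the choice of the 14-hydrate are assumptions, not values from this article):

M_AL = 26.98     # g/mol, aluminum
M_P = 30.97      # g/mol, phosphorus
M_ALUM = 594.4   # g/mol, Al2(SO4)3*14H2O; contains 2 mol Al per mol alum

def alum_dose_mg_per_l(target_p_mg_l: float, al_to_p_molar: float = 2.0) -> float:
    """Alum dose (mg/L) supplying `al_to_p_molar` mol Al per mol of target P."""
    moles_p = target_p_mg_l / 1000.0 / M_P   # mol P per litre
    moles_al = al_to_p_molar * moles_p       # required mol Al per litre
    return moles_al / 2.0 * M_ALUM * 1000.0  # mg alum per litre

print(round(alum_dose_mg_per_l(0.5), 1))  # 0.5 mg/L P -> about 9.6 mg/L alum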
Limitations
Alum treatment is less effective in deep lakes, as well as lakes with substantial external phosphorus loading. In deep lakes, the inactivation of phosphorus is not spread throughout the entire water column, as it is in shallower lakes due to the localization of aluminum hydroxide to the hypolimnion. Furthermore, externally loaded phosphorus often diffuses slowly downward from the lake surface, limiting its interaction with aluminum hydroxide within the hypolimnion and allowing phosphorus accumulation higher in the water column. Therefore, alum treatment is most effectively applied to shallow lakes with primarily internally loaded phosphorus. One exception is point sources of externally loaded phosphorus, which can be effectively regulated by direct application of aluminum sulfate to the source.
Another physical property to consider is a lake's susceptibility to mixing in the water column. Lakes with a higher Osgood Index, a parameter used to characterize the amount of wind-driven mixing that occurs in a lake, have been found to result in more effective alum treatment. Another parameter is the ratio of the watershed area to the lake surface area: lakes with lower watershed-to-lake-area ratios experienced greater longevity following treatment. These lakes tend to have longer residence times and tend to be influenced mainly by internally loaded phosphorus, which aids successful treatment. Regardless of application strategy, repeated alum treatment is often necessary for most lakes every 5 to 15 years. The necessity of repeated treatment requires continuous management and phosphorus monitoring to ensure optimal effectiveness.
Biological implications are another important consideration of alum treatment. Treatments increase water clarity, which has been correlated with increased plant growth at greater depths within the lake. Increased plant growth within lakes changes the character of the substrate, which is sometimes a factor in biodiversity. Lakes with benthic-feeding fish such as carp tend to have lower success at removing phosphorus: these species forage in lake sediments, disturbing the aluminum hydroxide flocs binding phosphorus to the lake bottom. An additional concern is that aluminum salts can acidify lakes, making them potentially toxic to aquatic organisms. However, the aluminum sulfate dosage used for lake treatment is often not high enough to pose significant toxicity to fish, although declines in algae and invertebrates have been observed in treated lakes. The alum dosage is also insufficient to cause toxicity in humans, and is often similar to alum doses used in conventional drinking water treatment. To reduce negative biological effects, the accepted limit for dissolved aluminum concentrations in a water body is 50 μg Al/L, and pH should be restricted to a range of 5.5–9.
References
External links
Phosphorus removal from wastewater - Lenntech
Water treatment | Chemical phosphorus removal | Chemistry,Engineering,Environmental_science | 1,376 |
22,545,016 | https://en.wikipedia.org/wiki/GSC%2002620-00648 | GSC 02620-00648 is a double star in the constellation Hercules. The brighter of the pair is a magnitude 12 star located approximately 1,660 light-years away. This star is about 1.18 times as massive as the Sun.
Planetary system
In 2006 the TrES program discovered exoplanet TrES-4b using the transit method. This planet orbits the primary star.
Binary star
In 2008 a study was undertaken of 14 stars with exoplanets that were originally discovered using the transit method through relatively small telescopes. These systems were re-examined with the 2.2 m reflector telescope at the Calar Alto Observatory in Spain. This star system, along with two others, was determined to be a previously unknown binary star system. The previously unknown secondary star is a dim magnitude 14 K- or M-type star separated by about 755 AU from the primary, appearing offset from the primary by about one arc second in the images. This discovery resulted in a recalculation of parameters for both the planet and the primary star.
See also
Trans-Atlantic Exoplanet Survey
List of extrasolar planets
Notes
Note b: The secondary star is identified with a "C" suffix so as to not confuse it with the planetary designation suffix "b".
References
External links
Hercules (constellation)
Planetary transit variables
Planetary systems with one confirmed planet
Binary stars
F-type stars | GSC 02620-00648 | Astronomy | 283 |
2,647,181 | https://en.wikipedia.org/wiki/Round-robin%20DNS | Round-robin DNS is a technique of load distribution, load balancing, or fault-tolerance provisioning multiple, redundant Internet Protocol service hosts, e.g., Web server, FTP servers, by managing the Domain Name System's (DNS) responses to address requests from client computers according to an appropriate statistical model.
In its simplest implementation, round-robin DNS works by responding to DNS requests not only with a single potential IP address, but with a list of potential IP addresses corresponding to several servers that host identical services. The order in which IP addresses from the list are returned is the basis for the term round robin. With each DNS response, the IP address sequence in the list is permuted. Traditionally, IP clients initially attempt connections with the first address returned from a DNS query, so that on different connection attempts, clients would receive service from different providers, thus distributing the overall load among servers.
Some resolvers attempt to re-order the list to give priority to numerically "closer" networks. This behaviour was standardized in RFC 3484 during the definition of IPv6; when applied to IPv4, as by a bug in Windows Vista, it caused issues and defeated round-robin load balancing. Some desktop clients do try alternate addresses after a connection timeout of up to 30 seconds.
Round-robin DNS is often used to load balance requests among a number of Web servers. For example, a company has one domain name and three identical copies of the same web site residing on three servers with three IP addresses. The DNS server will be set up so that domain name has multiple A records, one for each IP address. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and so forth.
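A minimal Python sketch of this rotation (illustrative only: the addresses are from the documentation range, and real DNS servers implement the permutation internally):

from collections import deque

# Hypothetical pool of A records for a single domain name.
addresses = deque(["192.0.2.1", "192.0.2.2", "192.0.2.3"])

def resolve() -> list:
    """Return the full address list, then rotate it so the next
    query sees a different first address."""
    response = list(addresses)
    addresses.rotate(-1)  # move the first address to the end of the list
    return response

for _ in range(4):
    print(resolve())  # the fourth query sees the same order as the first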
A round-robin DNS name is, on rare occasions, referred to as a "rotor" due to the rotation among alternative A records.
Drawbacks
Although easy to implement, round-robin DNS has a number of drawbacks, such as those arising from record caching in the DNS hierarchy itself, as well as client-side address caching and reuse, the combination of which can be difficult to manage. Round-robin DNS should not solely be relied upon for service availability. If a service at one of the addresses in the list fails, the DNS will continue to hand out that address and clients will still attempt to reach the inoperable service.
Round-robin DNS may not be the best choice for load balancing on its own, since it merely alternates the order of the address records each time a name server is queried. Because it does not take transaction time, server load, and network congestion into consideration, it works best for services with a large number of uniformly distributed connections to servers of equivalent capacity. Otherwise, it just does load distribution.
Methods exist to overcome such limitations. For example, modified DNS servers (such as lbnamed) can routinely poll mirrored servers for availability and load factor. If a server does not reply as required, the server can be temporarily removed from the DNS pool, until it reports that it is once again operating within specs.
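lbnamed's internals are not reproduced here; the following generic Python sketch illustrates the polling idea (the pool, port, and health criterion are all assumptions):

import socket

# Hypothetical server pool; a host counts as "healthy" if it accepts TCP
# connections on port 80 within a short timeout.
pool = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def is_healthy(address: str, port: int = 80, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the host succeeds."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

def active_pool() -> list:
    """Addresses the DNS server should continue to hand out."""
    return [addr for addr in pool if is_healthy(addr)]

print(active_pool())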
References
Domain Name System
Internet terminology
Fault-tolerant computer systems
Load balancing (computing) | Round-robin DNS | Technology,Engineering | 729 |
31,582,581 | https://en.wikipedia.org/wiki/Dry%20basis | Dry basis is an expression of a calculation in chemistry, chemical engineering and related subjects, in which the presence of water (H2O) (and/or other solvents) is neglected for the purposes of the calculation. Water (and/or other solvents) is neglected because addition and removal of water (and/or other solvents) are common processing steps, and also happen naturally through evaporation and condensation; it is frequently useful to express compositions on a dry basis to remove these effects.
In food science and pharmacy, dry basis also refers to a ratio of the weight of water to the weight of a completely dry material, as opposed to the wet basis ratio of water to a material under normal conditions that contains a measurable amount of moisture.
Example
An aqueous solution containing 2 g of glucose and 2 g of fructose per 100 g of solution contains 2/100=2% glucose on a wet basis, but 2/4=50% glucose on a dry basis. If the solution had contained 2 g of glucose and 3 g of fructose, it would still have contained 2% glucose on a wet basis, but only 2/5=40% glucose on a dry basis.
Frequently, concentrations are converted to a dry basis using the moisture (water) content M, expressed as a fraction of the total weight:

concentration (dry basis) = concentration (wet basis) / (1 − M)

In the first example above, the glucose concentration is 2% on a wet basis and the moisture content is 96%, giving 2% / (1 − 0.96) = 50% on a dry basis.
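A one-line conversion makes the example concrete (a sketch; the function name is ours):

def to_dry_basis(wet_fraction: float, moisture_fraction: float) -> float:
    """Convert a wet-basis mass fraction to a dry-basis mass fraction."""
    return wet_fraction / (1.0 - moisture_fraction)

# Glucose example from the text: 2% wet basis at 96% moisture -> 50% dry basis.
print(to_dry_basis(0.02, 0.96))  # 0.5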
References
Analytical chemistry
Food science | Dry basis | Chemistry | 292 |