| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
1,251,925 | https://en.wikipedia.org/wiki/Soil%20fertility | Soil fertility refers to the ability of soil to sustain agricultural plant growth, i.e. to provide plant habitat and result in sustained and consistent yields of high quality. It also refers to the soil's ability to supply plant/crop nutrients in the right quantities and qualities over a sustained period of time. A fertile soil has the following properties:
The ability to supply essential plant nutrients and water in adequate amounts and proportions for plant growth and reproduction; and
The absence of toxic substances that may inhibit plant growth, e.g. excess Fe²⁺, which leads to nutrient toxicity.
The following properties contribute to soil fertility in most situations:
Sufficient soil depth for adequate root growth and water retention;
Good internal drainage, allowing sufficient aeration for optimal root growth (although some plants, such as rice, tolerate waterlogging);
Topsoil, or the O horizon, with sufficient soil organic matter for healthy soil structure and soil moisture retention;
Soil pH in the range 5.5 to 7.0 (suitable for most plants but some prefer or tolerate more acid or alkaline conditions);
Adequate concentrations of essential plant nutrients in plant-available forms;
Presence of a range of microorganisms that support plant growth.
In lands used for agriculture and other human activities, maintenance of soil fertility typically requires the use of soil conservation practices. This is because soil erosion and other forms of soil degradation generally result in a decline in quality with respect to one or more of the aspects indicated above.
Soil fertility and quality of land have been impacted by the effects of colonialism and slavery both in the U.S. and globally. The introduction of harmful land practices such as intensive and non-prescribed burnings and deforestation by colonists created long-lasting negative results to the environment.
Soil fertility and depletion have different origins and consequences in various parts of the world. The intentional creation of dark earth in the Amazon promotes the important relationship between indigenous communities and their land. In African and Middle Eastern regions, humans and the environment are also altered due to soil depletion.
Soil fertilization
Bioavailable phosphorus (available to soil life) is the element in soil that is most often lacking. Nitrogen and potassium are also needed in substantial amounts. For this reason these three elements are always identified on a commercial fertilizer analysis. For example, a 10-10-15 fertilizer has 10 percent nitrogen, 10 percent available phosphorus (P2O5) and 15 percent water-soluble potassium (K2O). Sulfur is the fourth element that may be identified in a commercial analysis—e.g. 21-0-0-24 which would contain 21% nitrogen and 24% sulfate.
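As an illustration of how a grade translates into actual nutrient quantities, the short sketch below computes the nutrient mass in a bag of the 10-10-15 fertilizer mentioned above (the 50 kg bag weight is an arbitrary example, not from the original text):

```python
# Compute nutrient mass in a fertilizer bag from its grade, where each
# grade number is a percentage of the bag weight.
def nutrient_mass(bag_kg, grade):
    return [bag_kg * pct / 100 for pct in grade]

n, p2o5, k2o = nutrient_mass(50, (10, 10, 15))
print(f"50 kg of 10-10-15 supplies {n} kg N, {p2o5} kg P2O5, {k2o} kg K2O")
# -> 5.0 kg N, 5.0 kg P2O5 and 7.5 kg K2O
```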
Inorganic fertilizers are generally less expensive and have higher concentrations of nutrients than organic fertilizers. Also, since nitrogen, phosphorus and potassium generally must be in the inorganic forms to be taken up by plants, inorganic fertilizers are generally immediately bioavailable to plants without modification. However, studies suggest that chemical fertilizers have adverse health impacts on humans including the development of chronic disease from the toxins. As for the environment, over-reliance on inorganic fertilizers disrupts the natural nutrient balance in the soil, resulting in lower soil quality, loss of organic matter, and higher chances for erosion in the soil.
Additionally, the water-soluble nitrogen in inorganic fertilizers does not provide for the long-term needs of the plant and creates water pollution. Slow-release fertilizers may reduce leaching loss of nutrients and may make the nutrients that they provide available over a longer period of time.
Soil fertility is a complex process that involves the constant cycling of nutrients between organic and inorganic forms. As plant material and animal wastes are decomposed by micro-organisms, they release inorganic nutrients to the soil solution, a process referred to as mineralization. Those nutrients may then undergo further transformations which may be aided or enabled by soil micro-organisms. Like plants, many micro-organisms require or preferentially use inorganic forms of nitrogen, phosphorus or potassium and will compete with plants for these nutrients, tying up the nutrients in microbial biomass, a process often called immobilization. The balance between immobilization and mineralization processes depends on the balance and availability of major nutrients and organic carbon to soil microorganisms. Natural processes such as lightning strikes may fix atmospheric nitrogen by converting it to nitrogen dioxide (NO2). Denitrification may occur under anaerobic conditions (flooding) in the presence of denitrifying bacteria. Nutrient cations, including potassium and many micronutrients, are held in relatively strong bonds with the negatively charged portions of the soil in a process known as cation exchange.
Phosphorus is a primary factor in soil fertility, as it is an essential element of plant nutrition in the soil. It is essential for cell division and plant development, especially in seedlings and young plants. However, phosphorus is becoming increasingly hard to find, and its reserves are starting to be depleted due to excessive use as a fertilizer. The widespread use of phosphorus in fertilizers has led to pollution and eutrophication. Recently the term peak phosphorus has been coined, due to the limited occurrence of rock phosphate in the world.
A wide variety of materials have been described as soil conditioners due to their ability to improve soil quality; biochar, for example, offers multiple soil health benefits.
Food-waste compost was found to provide better soil improvement than manure-based compost.
Light and CO2 limitations
Photosynthesis is the process whereby plants use light energy to drive chemical reactions which convert CO2 into sugars. As such, all plants require access to both light and carbon dioxide to produce energy, grow and reproduce.
While plant growth is typically limited by nitrogen, phosphorus, and potassium, low levels of carbon dioxide can also act as a limiting factor. Peer-reviewed and published scientific studies have shown that increasing CO2 is highly effective at promoting plant growth up to levels over 300 ppm. Further increases in CO2 can, to a very small degree, continue to increase net photosynthetic output.
Soil depletion
Soil depletion occurs when the components which contribute to fertility are removed and not replaced, and the conditions which support soil's fertility are not maintained. This leads to poor crop yields. In agriculture, depletion can be due to excessively intense cultivation and inadequate soil management. Depletion may occur through a variety of other effects, including overtillage (which damages soil structure), underuse of nutrient inputs which leads to mining of the soil nutrient bank, and salinization of soil.
Colonial Impacts on Soil Depletion
Soil fertility can be severely challenged when land-use changes rapidly. For example, in Colonial New England, colonists made a number of decisions that depleted the soils, including: allowing herd animals to wander freely, not replenishing soils with manure, and a sequence of events that led to erosion. William Cronon wrote that "...the long-term effect was to put those soils in jeopardy. The removal of the forest, the increase in destructive floods, the soil compaction and close-cropping wrought by grazing animals, ploughing—all served to increase erosion." Cronon continues, explaining, “Where mowing was unnecessary and grazing among living trees was possible, settlers saved labor by simply burning the forest undergrowth...and turning loose their cattle...In at least one ill-favored area, the inhabitants of neighboring towns burned so frequently and graze so intensively that…the timber was greatly injured, and the land became hard to subdue...In the long run, cattle tended to encourage the growth of woody, thorn-bearing plants which they could not eat and which, once established, were very difficult to remove”. These practices were methods of simplifying labor for colonial settlers in new lands when they were not familiar with traditional Indigenous agricultural methods. Those Indigenous communities were not consulted but rather forced out of their homelands so European settlers could commodify their resources. The practice of intensive land burning and turning loose cattle ruined soil fertility and prohibited sustainable crop growth.
While colonists utilized fire to clear land, certain prescribed burning practices are common and valuable to increase biodiversity and in turn, benefit soil fertility. Without consideration of the intensity, seasonality, and frequency of the burns, the conservation of biodiversity and the overall health of the soil can be negatively impacted by fire.
In addition to soil erosion through using too much or too little fire, colonial agriculture also resulted in topsoil depletion. Topsoil depletion occurs when the nutrient-rich organic topsoil, which takes hundreds to thousands of years to build up under natural conditions, is eroded or depleted of its original organic material. The Dust Bowl in the Great Plains of North America is a striking example: about one-half of the original topsoil of the Great Plains has disappeared since the beginning of agricultural production there in the 1880s. Outside of the context of colonialism, topsoil depletion can historically be attributed to many past civilizations' collapses.
Soil Depletion and Enslavement
As historian David Silkenat explains, the goals of Southern plantation and slave owners, instead of measuring productivity based on outputs per acre, were to maximize the amount of labor that could be extracted from the enslaved workforce. The landscape was seen as disposable, and the African slaves were seen as expendable. Once these Southern farmers forced slaves to leach soils and engage in mass deforestation, they would discard the land and move toward more fertile prospects. The forced slave practices created extensive destruction on the land. The environmental impact included draining swamps, clearing forests for monocropping and to fuel steamships, and introducing invasive species, all leading to fragile ecosystems. In the aftermath, these practices left hillsides eroded, rivers clogged with sterile soil, and native species extinct. Silkenat summarizes this phenomenon of the relationship between enslavement and soil: "Although typically treated separately, slavery and the environment naturally intersect in complex and powerful ways, leaving lasting effects from the period of emancipation through modern-day reckonings with racial justice…the land too fell victim to the slave owner's lash".
Global Soil Depletion
One of the most widespread occurrences of soil depletion is in tropical zones where the nutrient content of soils is low. Soil depletion has affected plant life and agricultural crops in many countries. In the Middle East, for example, many countries find it difficult to grow produce because of droughts, lack of soil, and lack of irrigation. Three Middle Eastern countries show a decline in crop production, with the highest rates of productivity decline found in hilly and dryland areas.
Many countries in Africa also undergo a depletion of fertile soil. In regions of dry climate, such as Sudan and the countries of the Sahara Desert, droughts and soil degradation are common. Cash crops such as tea, maize, and beans require a variety of nutrients to grow healthily. As soil fertility has declined in the farming regions of Africa, artificial and natural fertilizers have been used to restore the nutrients of the ground soil.
Dark Earths
South America
The details of Indigenous societies prior to European colonization in 1492 within the Amazonian regions of South America, particularly the size of the communities and the depth of interactions with the environment, are continually debated. Central to the debate is the influence of Dark Earth. Dark Earth is a type of soil found in the Amazon that has a darker color, higher organic carbon content, and higher fertility than soil in other regions of South America which makes it highly coveted even today. Dark Earth deposits have been found, through ethnographic and archaeological studies, to have been created through ancient Indigenous practices by intentional soil management.
Ethnoarchaeologist Morgan Schmidt outlines how this carbon-rich soil was intentionally created by communities in the Amazon. While Dark Earth, and other anthropic soils, can be found all throughout the world, Amazonian Dark Earth is particularly significant because “it contrasts too sharply with the especially poor fertility of typical highly weathered tropical upland soils in the Amazon”. There is much evidence to suggest that the development of ancient agricultural societies in the Amazon was strongly influenced by the formation of Dark Earth. As a result, Amazonian societies benefitted from the dark earth in terms of agricultural success and enhanced food production. Soil analyses have been completed on the modern and ancient Kuikuro Indigenous Territory in the Upper Xingu River basin in southeastern Amazonia through archaeological and ethnographic research to determine the human relation to the soil. The “results demonstrate the intentional creation of dark earth, highlighting how Indigenous knowledge can provide strategies for sustainable rainforest management”.
Africa
In Egypt, earthworms of the Nile River Valley contributed to the significant fertility of the soils. As a result, Cleopatra declared the earthworm a sacred animal in recognition of its positive impact. No one, including farmers, was "allowed to harm or remove an earthworm for fear of offending the deity of fertility". In Ghana and Liberia, it is a long-standing practice to combine different types of waste to create fertile soil referred to as African Dark Earths. This soil contains high concentrations of calcium, phosphorus, and carbon.
Humans and Soil
Albert Howard is credited as the first Westerner to publish Native techniques of sustainable agriculture. As noted by Howard in 1944, “In all future studies of disease we must, therefore, always begin with the soil. This must be gotten into good condition first of all and then the reaction of the soil, the plant, animal, and man observed. Many diseases will then automatically disappear...Soil fertility is the basis of the public health system of the future...”. Howard connects the health crises of crops to the impacts of livestock and human health, ultimately spreading the message that humans must respect and restore the soil for the benefit of the human and non-human world. He continues that industrial agriculture disrupts the delicate balance of nature and irrevocably robs the soil of its fertility.
Irrigation effects
Irrigation is a process by which crops are watered by man-made means, such as bringing in water from pipes, canals, or sprinklers. Irrigation is used when the natural rainfall patterns of a region are insufficient to maintain crops. Ancient civilizations relied heavily on irrigation, and today about 18% of the world's cropland is irrigated. The quality of irrigation water is very important for maintaining soil fertility and tilth, and for allowing plants to use more of the soil depth. When soil is irrigated with highly alkaline water, unwanted sodium salts build up in the soil, making its drainage capacity very poor, so plant roots cannot penetrate deep into alkali soils for optimum growth. When soil is irrigated with low pH / acidic water, the useful salts (Ca, Mg, K, P, S, etc.) are leached from the acidic soil by the draining water, and in addition unwanted aluminium and manganese salts that are harmful to plants are dissolved from the soil, impeding plant growth. When soil is irrigated with high-salinity water, or when insufficient water drains out of the irrigated soil, the soil converts into saline soil or loses its fertility. Saline water raises the osmotic pressure that plant roots must overcome, which impedes the uptake of water and nutrients.
Topsoil loss takes place in alkali soils due to erosion by rainwater surface flows or drainage, as these soils form colloids (fine mud) in contact with water. Plants absorb only water-soluble inorganic salts from the soil for their growth. Soil as such does not lose fertility merely by growing crops; rather, it loses fertility through the accumulation of unwanted inorganic salts, and the depletion of wanted ones, caused by improper irrigation and acidic rain water (both the quantity and the quality of water matter). The fertility of many soils which are not suitable for plant growth can be gradually enhanced many times over by providing adequate irrigation water of suitable quality and good drainage.
Global distribution
See also
Arable land
Plaggen soil
Shifting cultivation
Soil contamination
Soil life
Terra preta
Cation-exchange capacity
References
Soil
Soil improvers
Fertilizers
Horticulture | Soil fertility | Chemistry | 3,338 |
3,057,073 | https://en.wikipedia.org/wiki/John%20Leighfield | John Percival Leighfield (born 1938) is a British IT industry businessman and was previously chairman of RM plc from 1993 until 2011.
Currently John Leighfield is a Director of Getmapping, a UK supplier of aerial photography, mapping products and data hosting solutions. He is also Chairman of Governors of the WMG Academy Trust (which operates two University technical colleges).
John Leighfield was born in Oxford, England, and was a pupil at Magdalen College School. He then read Greats at Exeter College, Oxford. He has an MA from Oxford, Honorary Doctorates from the University of Central England in Birmingham (DUniv), from De Montfort University (DTech), from Wolverhampton University (DTech) and from the University of Warwick (DLL). He is a Fellow of the RSA, RGS, CMI, IET, and BCS.
Leighfield has pursued a career in IT, initially in the 1960s with the Ford Motor Company, where he did pioneering work on computer systems in finance and manufacturing, Plessey (where he was head of management services) and British Leyland (from the early 1970s). In 1987, he led an employee buy out of Istel Ltd, which he had established as a subsidiary of British Leyland. In 1989, the company was subsequently taken over by AT&T. He was the executive chairman of AT&T Istel until April 1993.
In November 1993, he joined RM (a British educational computing company) as a non-executive director and in October 1994 became the non-executive chairman. He has been a non-executive director of a number of other companies as well, including Halifax plc and Synstar plc (of which he is also non-executive chairman).
Leighfield was president of the British Computer Society (1993–4) and the Computing Services and Software Association (1995–6). He is president of the Institute for the Management of Information Systems (IMIS), a UK professional association. He has been a member of the council of University of Warwick, chairman of the advisory board, and an honorary visiting professor at the Warwick Business School. He was pro-chancellor and chairman of the council at the University of Warwick from 2002 to 2011.
In the Queen's Birthday Honours 1998 Leighfield was appointed as a Commander of the Most Excellent Order of the British Empire. In 2006, Leighfield was awarded the Mountbatten Medal.
In 2005, he was appointed as a non-executive director of Getmapping plc and Master of the Worshipful Company of Information Technologists.
Leighfield lives in Oxford. He was formerly Chairman of the Governors of Magdalen College School. He is Chairman of the Oxford Philomusica Advisory Council, the Resident Professional Orchestra at the University of Oxford. In his spare time, he has an interest in maps, especially of Oxfordshire. He is married with children and grandchildren.
On 15 January 2016 Leighfield gave an in-depth interview to Alan Cane, Former Editor of the Financial Times, on his life and career for Archives of IT.
References
External links
Synstar information
BCS Strategic Panel Members
Intellect UK information
BCS Oxfordshire Branch photograph
1938 births
Living people
Businesspeople from Oxford
Alumni of Exeter College, Oxford
British businesspeople
Businesspeople in computing
People associated with the University of Warwick
Fellows of the British Computer Society
Fellows of the Royal Geographical Society
Fellows of the Institution of Engineering and Technology
Commanders of the Order of the British Empire
Presidents of the British Computer Society
Masters of the Worshipful Company of Information Technologists
Masters of the Worshipful Company of Educators | John Leighfield | Engineering | 723 |
50,167,582 | https://en.wikipedia.org/wiki/Equation%20xy%20%3D%20yx | In general, exponentiation fails to be commutative. However, the equation has solutions, such as
History
The equation $x^y = y^x$ is mentioned in a letter of Bernoulli to Goldbach (29 June 1728). The letter contains a statement that when $x \ne y$, the only solutions in natural numbers are $(2, 4)$ and $(4, 2)$, although there are infinitely many solutions in rational numbers, such as $\left(\tfrac{27}{8}, \tfrac{9}{4}\right)$ and $\left(\tfrac{9}{4}, \tfrac{27}{8}\right)$.
The reply by Goldbach (31 January 1729) contains a general solution of the equation, obtained by substituting $y = vx$. A similar solution was found by Euler.
J. van Hengel pointed out that if $r$ and $n$ are positive integers with $r \ge 3$, then $r^{r+n} > (r+n)^r$; therefore it is enough to consider the possibilities $x = 1$ and $x = 2$ in order to find solutions in natural numbers.
The problem was discussed in a number of publications. In 1960, the equation was among the questions on the William Lowell Putnam Competition, which prompted Alvin Hausner to extend results to algebraic number fields.
Positive real solutions
Explicit form
An infinite set of trivial solutions in positive real numbers is given by $x = y$. Nontrivial solutions can be written explicitly using the Lambert W function. The idea is to write the equation as $ae^b = ce^d$ and try to match $a = c$ and $b = d$ by multiplying and raising both sides by the same value, then apply the definition of the Lambert W function ($we^w = z \Leftrightarrow w = W(z)$) to isolate the desired variable:

$$x^y = y^x \;\Longrightarrow\; \frac{\ln x}{x} = \frac{\ln y}{y} \;\Longrightarrow\; (-\ln y)\,e^{-\ln y} = \frac{-\ln x}{x} \;\Longrightarrow\; y = e^{-W\left(\frac{-\ln x}{x}\right)} = \frac{-x}{\ln x}\,W\!\left(\frac{-\ln x}{x}\right),$$

where in the last step we used the identity $e^{-W(z)} = W(z)/z$.

Here we split the solution into the two branches of the Lambert W function and focus on each interval of interest, applying the identities:

$$W_0\!\left(\frac{-\ln x}{x}\right) = -\ln x \quad \text{for } 0 < x \le e,$$

$$W_{-1}\!\left(\frac{-\ln x}{x}\right) = -\ln x \quad \text{for } x \ge e.$$

Hence the non-trivial solutions are:

$$y = \frac{-x}{\ln x}\,W_{-1}\!\left(\frac{-\ln x}{x}\right) \quad \text{for } 1 < x < e, \qquad y = \frac{-x}{\ln x}\,W_{0}\!\left(\frac{-\ln x}{x}\right) \quad \text{for } x > e.$$
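As a numerical check of the branch formulas above, the following sketch evaluates the nontrivial solution using SciPy's Lambert W implementation (the sample points are illustrative choices, not part of the original derivation):

```python
# Numerical check of y = (-x/ln x) * W(-ln x / x): branch k=-1 applies
# for 1 < x < e, branch k=0 for x > e.
import numpy as np
from scipy.special import lambertw

def nontrivial_y(x):
    z = -np.log(x) / x
    w = lambertw(z, k=(-1 if x < np.e else 0))
    return float((-x / np.log(x)) * w.real)

for x in (1.5, 2.0, 4.0, 10.0):
    y = nontrivial_y(x)
    assert np.isclose(x**y, y**x)
    print(f"x = {x}: y = {y:.6f}")   # x = 2 gives y = 4, and vice versa
```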
Parametric form
Nontrivial solutions can be more easily found by assuming $x \ne y$ and letting $y = vx$ with $v > 0$, $v \ne 1$.

Then

$$x^{vx} = (vx)^x.$$

Raising both sides to the power $\tfrac{1}{x}$ and dividing by $x$, we get

$$x^{v-1} = v.$$

Then nontrivial solutions in positive real numbers are expressed as the parametric equation

$$x = v^{1/(v-1)}, \quad y = v^{v/(v-1)}, \quad v > 0,\ v \ne 1.$$

The full solution thus is the line $y = x$ together with this parametric curve.
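The parametric form translates directly into code; this sketch (with arbitrarily chosen parameter values) generates solution pairs and verifies them:

```python
# Generate nontrivial solutions (x, y) = (v**(1/(v-1)), v**(v/(v-1)))
# and confirm x**y == y**x to floating-point accuracy.
import math

for v in (0.5, 1.5, 2.0, 3.0, 4.0):
    x = v ** (1 / (v - 1))
    y = v ** (v / (v - 1))
    assert math.isclose(x**y, y**x, rel_tol=1e-12)
    print(f"v = {v}: x = {x:.6f}, y = {y:.6f}")
```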
Based on the above solution, the derivative $\tfrac{dy}{dx}$ is $+1$ for the pairs on the line $y = x$, while for the other pairs it can be found by $\tfrac{dy/dv}{dx/dv}$, which straightforward calculus gives as:

$$\frac{dy}{dx} = v^2\,\frac{v - 1 - \ln v}{v - 1 - v\ln v}$$

for $v > 0$, $v \ne 1$, and $\tfrac{dy}{dx} = -1$ in the limit $v \to 1$ (at the point $(e, e)$).
Setting $v = 2$ or $v = \tfrac{1}{2}$ generates the nontrivial solution in positive integers, $x = 2$, $y = 4$ (respectively $x = 4$, $y = 2$).
Other pairs consisting of algebraic numbers exist, such as $x = \sqrt{3}$ and $y = 3\sqrt{3}$, as well as $x = \sqrt[3]{4}$ and $y = 4\sqrt[3]{4}$.
The parameterization above leads to a geometric property of this curve. It can be shown that $x^y = y^x$ describes the isocline curve where power functions of the form $y = x^v$ have slope $v^2$ for some positive real choice of $v \ne 1$. For example, $y = x^8$ has a slope of $64$ at $\left(\sqrt[7]{8},\, 8\sqrt[7]{8}\right)$, which is also a point on the curve $x^y = y^x$.
The trivial and non-trivial solutions intersect when $v = 1$. The equations above cannot be evaluated directly at $v = 1$, but we can take the limit as $v \to 1$. This is most conveniently done by substituting $v = 1 + 1/n$ and letting $n \to \infty$, so

$$x = \lim_{v \to 1} v^{1/(v-1)} = \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n = e.$$

Thus, the line $y = x$ and the curve for $x \ne y$ intersect at $x = y = e$.

As $x \to \infty$, the nontrivial solution asymptotes to the line $y = 1$. A more complete asymptotic form is

$$y = 1 + \frac{\ln x}{x} + \frac{3}{2}\,\frac{(\ln x)^2}{x^2} + \cdots.$$
Other real solutions
An infinite set of discrete real solutions with at least one of $x$ and $y$ negative also exist. These are provided by the above parameterization when the values generated are real. For example, $v = -2$ gives $x = -\tfrac{1}{\sqrt[3]{2}}$, $y = \sqrt[3]{4}$ as a solution (using the real cube root of $-2$). Similarly, an infinite set of discrete solutions is given by the trivial solution $y = x$ for $x < 0$ whenever $x^x$ is real; for example, $x = y = -1$.
Similar graphs
Equation $\sqrt[y]{x} = \sqrt[x]{y}$
The equation $\sqrt[y]{x} = \sqrt[x]{y}$ produces a graph where the line and curve intersect at $(1/e, 1/e)$. The curve also terminates at $(0, 1)$ and $(1, 0)$, instead of continuing on to infinity.
The curved section can be written explicitly as

$$y = e^{W_0(x \ln x)} \quad \text{for } 0 < x < \tfrac{1}{e}, \qquad y = e^{W_{-1}(x \ln x)} \quad \text{for } \tfrac{1}{e} < x < 1.$$
This equation describes the isocline curve where power functions of the form $y = x^v$ have slope 1, analogous to the geometric property of $x^y = y^x$ described above.
The equation $\sqrt[y]{x} = \sqrt[x]{y}$ is equivalent to $x^x = y^y$, as can be seen by raising both sides to the power $xy$. Equivalently, this can also be shown to demonstrate that the equation $\sqrt[x]{x} = \sqrt[y]{y}$ is equivalent to $x^y = y^x$.
Equation $\log_x y = \log_y x$
The equation $\log_x y = \log_y x$ produces a graph where the curve and line intersect at $(1, 1)$. The curve becomes asymptotic to $0$, as opposed to $1$; it is, in fact, the positive section of $y = 1/x$.
References
External links
Diophantine equations
Recreational mathematics | Equation xy = yx | Mathematics | 780 |
76,614,588 | https://en.wikipedia.org/wiki/Tauri%20%28software%20framework%29 | Tauri is an open-source software framework designed to create cross-platform desktop and mobile applications on Linux, macOS, Windows, Android and iOS using a web frontend. The framework functions with a Rust back-end and a JavaScript front-end that runs on local WebView libraries using rendering libraries like Tao and Wry. Tauri aims to provide a more lightweight alternative to similar existing frameworks such as Electron.
Tauri is governed by the Tauri Foundation within the Dutch non-profit Commons Conservancy. As of 2024, Tauri is licensed and distributed under the MIT license, and Apache 2.0 license.
Tauri 1.0 was released in June 2020. In early 2024, Tauri v2 Beta was released, which included mobile support for iOS and Android systems. On 2 October 2024, Tauri v2 was released as a stable release.
Architecture
Central to Tauri's architecture are core components such as the Tauri crate, which serves as a hub for managing various functionalities like runtimes, macros, utilities, and APIs. The toolkit also includes essential tooling such as bundlers, CLI interfaces, and scaffolding kits, to streamline the development and deployment processes. Tauri supports cross-platform application window creation (TAO) and WebView rendering (WRY), which allows compatibility across macOS, Linux and Windows platforms.
Tauri is built using Rust, a programming language emphasizing performance, type safety, and memory safety. It also allows users to switch individual APIs on and off, and provides an isolation pattern to prevent untrusted scripts from accessing the back-end from a WebView.
See also
References
External links
2020 software
Cross-platform desktop-apps development
Cross-platform free software
Free software for Linux
Free software for Windows
Free software for macOS
Software using the MIT license
Free software programmed in Rust | Tauri (software framework) | Technology | 388 |
1,207,129 | https://en.wikipedia.org/wiki/Location%20arithmetic | Location arithmetic (Latin arithmetica localis) is the additive (non-positional) binary numeral systems, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard-like grid.
Napier's terminology, derived from using the positions of counters on the board to represent numbers, is potentially misleading because the numbering system is, in facts, non-positional in current vocabulary.
During Napier's time, most computations were made on boards with tally-marks or jetons. So, contrary to what a modern reader might assume, his goal was not to use moves of counters on a board to multiply, divide and find square roots, but rather to find a way to compute symbolically with pen and paper.
However, when reproduced on the board, this new technique did not require mental trial-and-error computations nor complex carry memorization (unlike base 10 computations). He was so pleased by his discovery that he said in his preface:
Location numerals
Binary notation had not yet been standardized, so Napier used what he called location numerals to represent binary numbers. Napier's system uses sign-value notation to represent numbers; it uses successive letters from the Latin alphabet to represent successive powers of two: a = 2^0 = 1, b = 2^1 = 2, c = 2^2 = 4, d = 2^3 = 8, e = 2^4 = 16 and so on.
To represent a given number as a location numeral, that number is expressed as a sum of powers of two and then each power of two is replaced by its corresponding digit (letter). For example, when converting from a decimal numeral:
87 = 1 + 2 + 4 + 16 + 64 = 2^0 + 2^1 + 2^2 + 2^4 + 2^6 = abceg
Using the reverse process, a location numeral can be converted to another numeral system. For example, when converting to a decimal numeral:
abdgkl = 2^0 + 2^1 + 2^3 + 2^6 + 2^10 + 2^11 = 1 + 2 + 8 + 64 + 1024 + 2048 = 3147
Napier showed multiple methods of converting numbers in and out of his numeral system. These methods are similar to modern methods of converting numbers in and out of the binary numeral system, so they are not shown here. Napier also showed how to add, subtract, multiply, divide, and extract square roots.
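For a modern reader, the correspondence with binary is easiest to see in code. The sketch below (the function names are mine, not Napier's) converts between ordinary integers and location numerals:

```python
# Convert between non-negative integers and Napier's location numerals,
# where 'a' = 2**0, 'b' = 2**1, 'c' = 2**2, and so on.
from string import ascii_lowercase

def to_location(n):
    """Return the abbreviated location numeral for n (n >= 1)."""
    return "".join(ascii_lowercase[i] for i in range(n.bit_length()) if n >> i & 1)

def from_location(numeral):
    """Sum the powers of two named by the digits (repeats allowed)."""
    return sum(2 ** ascii_lowercase.index(ch) for ch in numeral)

assert to_location(87) == "abceg"          # 87 = 1+2+4+16+64
assert from_location("abdgkl") == 3147     # example from the text
```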
Abbreviated and extended form
As in any numeral system using sign-value notation (but not those using positional notation), digits (letters) can be repeated such that multiple numerals can represent a single number. For example:
abbc = acc = ad = 9
Additionally, the order of digits does not matter. For example:
abbc = bbca = bcba = ... = 9
Because each digit in a location numeral represents twice the value of its next-lower digit, replacing any two occurrences of the same digit with one of the next-higher digit does not change the numeral's numeric value. Thus, repeatedly applying the rules of replacement aa → b, bb → c, cc → d, etc. to a location numeral removes all repeated digits from that numeral.
Napier called this process abbreviation and the resulting location numeral the abbreviated form of that numeral; he called location numerals containing repeated digits extended forms. Each number can be represented by a unique abbreviated form, not considering the order of its digits (e.g., abc, bca, cba, etc. all represent the number 7).
Arithmetic
Addition
Location numerals allow for a simple and intuitive algorithm for addition:
join the numerals end-to-end
when necessary, rearrange this conjoined numeral's digits so they are in ascending order
abbreviate this rearranged and conjoined numeral
For example, to add 157 = acdeh and 230 = bcfgh, join the numerals end-to-end:
acdeh + bcfgh → acdehbcfgh
rearrange the digits of the previous result (because the digits of acdehbcfgh are not in ascending order):
acdehbcfgh → abccdefghh
and abbreviate the previous result:
abccdefghh → abddefghh → abeefghh → abffghh → abgghh → abhhh → abhi
The final result, abhi, equals 387 (abhi = 2^0 + 2^1 + 2^7 + 2^8 = 1 + 2 + 128 + 256 = 387); this is the same result achieved by adding 157 and 230 in decimal notation.
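In code, the whole addition algorithm is a concatenation followed by abbreviation; the routine below is a modern paraphrase of Napier's replacement rules, checked against the example just shown:

```python
# Addition of location numerals: concatenate, sort, then abbreviate by
# replacing any doubled digit with one copy of the next-higher digit.
from collections import Counter
from string import ascii_lowercase

def abbreviate(numeral):
    counts = Counter(ascii_lowercase.index(ch) for ch in numeral)
    i = 0
    while i <= max(counts, default=0):
        if counts[i] >= 2:                 # aa -> b, bb -> c, ...
            counts[i + 1] += counts[i] // 2
            counts[i] %= 2
        i += 1
    return "".join(ascii_lowercase[i] for i in sorted(counts) if counts[i])

def add(p, q):
    return abbreviate("".join(sorted(p + q)))

assert add("acdeh", "bcfgh") == "abhi"     # 157 + 230 = 387
```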
Subtraction
Subtraction is also intuitive, but may require expanding abbreviated forms to extended forms to perform borrows.
Write the minuend (the larger number, the one to be diminished) and remove from it all the digits appearing in the subtrahend (the smaller number). If a digit to be removed does not appear in the minuend, borrow it by expanding the next larger digit. Repeat until all the digits of the subtrahend have been removed.
A few examples show it is simpler than it sounds:
Subtract 5 = ac from 77 = acdg:
acdg - ac = dg = 8+64 = 72.
Subtract 3 = ab from 77 = acdg:
acdg - ab = abbdg - ab = bdg = 2+8+64 = 74.
Subtract 7 = abc from 77 = acdg:
acdg - abc = abbccg - abc = bcg = 2+4+64 = 70.
Doubling, halving, odd and even
Napier carried out the rest of arithmetic, that is multiplication, division and square root, on an abacus, as was common in his time. However, since the development of microprocessor computers, many applicable algorithms based on doubling and halving have been developed or revived.
Doubling is done by adding a numeral to itself, which means doubling each of its digits. This gives an extended form, which is abbreviated if needed. This operation can be done in one step by changing each digit of a numeral to the next larger digit. For example, the double of a is b, the double of b is c, the double of ab is bc, the double of acfg is bdgh, etc.
Similarly, multiplying by a power of two just translates the digits. To multiply by c = 4, for example, is to transform the digits a → c, b → d, c → e, ...
Halving is the reverse of doubling: change each digit to the next smaller digit. For example, the half of bdgh is acfg.
One sees immediately that it is only feasible when the numeral to be halved does not contain an a (or, if the numeral is extended, an odd number of as). In other words, an abbreviated numeral is odd if it contains an a and even if it does not.
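Expressed on the letter alphabet, doubling and halving are one-character shifts, as the following sketch shows (an illustration in modern code, not Napier's notation):

```python
# Doubling shifts every digit up one letter; halving shifts down.
def double(numeral):
    return "".join(chr(ord(ch) + 1) for ch in numeral)

def halve(numeral):
    return "".join(chr(ord(ch) - 1) for ch in numeral)

assert double("acfg") == "bdgh"   # example from the text
assert halve("bdgh") == "acfg"
# A numeral is odd exactly when its abbreviated form contains an 'a'.
```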
With these basic operations (doubling and halving), we can adapt all the binary algorithms, starting with, but not limited to, the bisection method and dichotomic search.
Multiplication
Napier performed multiplication and division on an abacus, as was common in his times. However, Egyptian multiplication gives an elegant way to carry out multiplication without tables using only doubling, halving and adding.
Multiplying a single-digit number by another single-digit number is a simple process. Because all letters represent a power of 2, multiplying digits is the same as adding their exponents. This can also be thought of as finding the index of one digit in the alphabet (a = 0, b = 1, ...) and incrementing the other digit by that amount in terms of the alphabet (b + 2 => d).
For example, multiply 4 = c by 16 = e
c * e = 2^2 * 2^4 = 2^6 = g
or...
AlphabetIndex(c) = 2, so... e => f => g
To find the product of two multiple digit numbers, make a two column table. In the left column write the digits of the first number, one below the other. For each digit in the left column, multiply that digit and the second number and record it in the right column. Finally, add all the numbers of the right column together.
As an example, multiply 238 = bcdfgh by 13 = acd
| a | bcdfgh |
| c | defhij |
| d | efgijk |
The result is the sum of the right column: bcdfgh + defhij + efgijk = bcekl = 2+4+16+1024+2048 = 3094.
It is interesting to notice that the left column can also be obtained by successive halves of the first number, from which the even numbers are removed. In our example, acd, bc (even), ab, a. Noticing that the right column contains successive doubles of the second number, shows why the peasant multiplication is exact.
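This halving-and-doubling structure can be run directly on integers; the sketch below is a modern rendering of the Egyptian (peasant) algorithm described above, checked against the worked example:

```python
# Peasant multiplication: halve one factor (dropping even rows),
# double the other, and sum the right-hand entries of the odd rows.
def peasant_multiply(m, n):
    total = 0
    while m > 0:
        if m % 2 == 1:    # odd rows are kept, matching the table above
            total += n
        m //= 2           # halve, removing even numbers
        n *= 2            # double
    return total

assert peasant_multiply(13, 238) == 3094   # example from the text
```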
Division, remainder
Division can be carried out by successive subtractions: the quotient is the number of times the divisor can be subtracted from the dividend, and the remainder is what is left after all the possible subtractions.
This process, which can be very long, may be made efficient if instead of the divisor we subtract multiples of the divisor, and computations are easier if we restrict ourselves to multiples by a power of 2.
In fact, this is what we do in the long division method.
The grid
Location arithmetic uses a square grid where each square on the grid represents a value. Two sides of the grid are marked with
increasing powers of two. Any inner square can be identified by two numbers on these two sides, one being vertically below the inner
square and the other to its far right. The value of the square is the product of these two numbers.
For instance, the square in this example grid represents 32, as it is the product of 4 on the right column and 8 from the bottom row. The grid itself can be any size, and larger grids simply permit us to handle larger numbers.
Notice that moving either one square to the left or one square up doubles the value. This property can be used to perform binary
addition using just a single row of the grid.
Addition
First, lay out a binary number on a row using counters to represent the 1s in the number. For example, 29 (= 11101 in binary) would be placed on the board like this:
The number 29 is clearly the sum of the values of the squares on which there are counters. Now overlay the second number on this row. Say we place 9 (= 1001 in binary) on it like this.
The sum of these two numbers is just the total value represented by the counters on the board, but some of the squares have more than one counter. Recall however, that moving to the left of a square doubles its value. So we replace two counters on a square with one counter to its left without changing the total value on the board. Note that this is the same idea used to abbreviate
location numerals. Let's start by replacing the rightmost pair of counters with a counter to its left, giving:
We still have another square with two counters on it, so we do it again:
But replacing this pair created another square with two counters on it, so we replace a third time:
Now each square has just one counter, and reading off the result in binary 100110 (= 38) gives the correct result.
Subtraction
Subtracting is not much more complicated than addition: instead of adding counters on the board we remove them. To "borrow" a value, we replace a counter on a square with two to its right.
Let's see how we might subtract 12 from 38. First place 38 (= 100110 in binary) on a row, and then place 12 (= 1100 in binary) under it:
For every counter on the lower row that has a counter above it, remove both counters. We can remove one such pair on the board,
resulting in:
Now we need to "borrow" counters to get rid of the remaining counter on the bottom. First replace the leftmost counter on the top row with two to its right:
Now replace one of the two counters with two more to its right, giving:
We can now take away one of the counters on the top row with the remaining counter on the bottom row:
and read off 26, the final result.
Some properties of the grid
Unlike addition and subtraction, the entire grid is used to multiply, divide, and extract square roots. The grid has some useful properties utilized in these operations. First, all the squares on any diagonal going from the bottom left to the top right have the same value.
Since a diagonal move can be broken down into a move to the right (which halves the value) followed by a move
up (which doubles the value), the value of the square stays the same.
In conjunction with that diagonal property, there is a quick way to divide the numbers on the bottom and right edges of the grid.
Locate the dividend 32 along the right side and the divisor 8 on the bottom edge of the grid. Extend a diagonal from the dividend and locate the square where it intersects a vertical line from the divisor. The quotient lies at the right end of the grid from this square, which for our example is 4.
Why does this work? Moving along the diagonal does not change the value; the value of the square on the intersection is still the dividend. But we also know it is the product of the squares along the bottom and right edge. Since the square on the bottom edge is the divisor, the square on the right edge is the quotient.
Napier extends this idea to divide two arbitrary numbers, as shown below.
Multiplication
To multiply a pair of binary numbers, first mark the two numbers
on the bottom and the right side of the grid. Say we want to
multiply 22 (= 10110) by 9 (= 1001).
Now place counters at every "intersection" of vertical and
horizontal rows of the 1s in each number.
Notice that each row of counters on the grid is just
22 multiplied by some
power of two. In fact, the total value of the counters is the
sum of two rows
22*8 + 22*1 = 22*(8+1) = 22*9
So the counters on the board actually represent the product
of the two numbers, except it is not possible to "read off" the
answer just yet.
Recall that moving counters diagonally does not change the value,
so move all the counters on inner squares diagonally until they
hit either the bottom row or the left column.
Now we make the same moves we did for addition. Replace two counters on a square with one to its left. If the square is on the left column, replace two counters with one above it. Recall that the value of a square doubles if you move up, so this does not change the value on the grid.
Let's first replace the two counters on the second square at the bottom with one to its left which leaves two counters at the corner.
Finally, replace the two counters on the corner with one above it
and "read off" the binary number in an L-shaped fashion, starting from
the top left down to the bottom left corner, and then over to the
bottom right.
Read the counters along the L but do not double count the corner square.
You will read the binary result 11000110 = 198 which is indeed 22*9.
Why can we read the binary number in this L-shaped fashion? The
bottom row is of course just the first six powers of two, but
notice that the leftmost column has the next five powers of
two. So we can directly read off an 11 digit binary number from
the L-shaped set of 11 squares that lie along the left and bottom
sides of the grid.
Our small 6×6 grid can only multiply numbers each up to 63, and in general an n×n grid can multiply two numbers each up to 2^n − 1. This scales very fast, so a board with 20 numbers per side, for instance, can multiply numbers each up to a little over one million.
Division
Martin Gardner presented a slightly easier to understand
version of Napier's division method, which is what is
shown here.
Division works pretty much the reverse of multiplication. Say we want
to divide 485 by 13. First place counters for 485 (= 111100101) along
the bottom edge and mark 13 (= 1101) along the right edge. To save
space, we'll just look at a rectangular portion of the board because
that's all we actually use.
Starting from the left, the game is to move counters diagonally into
"columns of divisors" (that is, with one counter on each row marked
with a 1 from the divisor.) Let's demonstrate this with the leftmost
block of counters.
Now the next block of counters we might try would begin with the
leftmost counter on the bottom, and we might attempt something like
except that we do not have any counters that we can move diagonally from the bottom edge into squares that would form the rest of the "column of divisors."
In such cases, we instead "double down" the counter on the bottom row and form a column one over to the right. As you will soon see, it will always be possible to form a column this way. So first replace the counter on the bottom with two to its right.
and then move one diagonally to the top of the column, and move
another counter located on the edge of the board into its spot.
It looks like we still do not have a counter on the bottom edge to move
diagonally into the remaining square, but notice that we can instead
double down the leftmost counter again and then move it into the
desired square.
and now move one counter diagonally to where we want it.
Let's proceed to build the next column. Once again, notice that moving the leftmost counter to the top of the column does not leave enough counters at the bottom to fill in the remaining squares.
So we double down the counter and move one diagonally into the next column over. Let's also move the rightmost counter into the column, and here is how it looks after these steps.
We still have a missing square, but we just double down again and move
the counter into this spot and end up with
At this point, the counter on the bottom edge is so far to the right
that it cannot go diagonally to the top of any column, which signals
that we are done.
The result is "read" off the columns—each column with counters is
treated as a 1 and empty columns are 0. So the result is
100101 (= 37) and the remainder is the binary value of any counters
still left along the bottom edge. There is one counter on the third
column from the right, so we read it as 100 (= 4) and we get 485
÷ 13 = 37 with a remainder 4.
See also
Jeton
References
John Napier; translated by William Frank Richardson; introduction by Robin E. Rider (1990). Rabdology. MIT Press.
Martin Gardner (1986). Knotted Doughnuts and Other Mathematical Entertainments. W. H. Freeman and Company.
External links
Javascript simulation of Location arithmetic
Mathematical tools
Arithmetic | Location arithmetic | Mathematics,Technology | 4,098 |
43,530,417 | https://en.wikipedia.org/wiki/Badge%20tether | A badge tether or badge reel is a spring-loaded reeled tether that resembles a button badge in appearance or attachment. It is used to avoid damage to or the loss of small important objects kept on-person that need to be accessed frequently or quickly, such as a ski pass, identification card or badge, name badge, keys, a phone or other handheld device, or a penknife or other small tool.
Badge tethers consist of a thin cord, on the order of a millimetre in diameter and a metre long, with one end wound round a spring-loaded reel contained within a small badge-like body; the body has a clip for a belt, belt loop, pocket, the edge of the clothing itself, or an attachment specifically for such a tether. The other end of the cord has a clip, loop, split ring, strap, or other fastener.
Hardware (mechanical) | Badge tether | Physics,Technology,Engineering | 187 |
50,403,904 | https://en.wikipedia.org/wiki/Decapping | Decapping (decapsulation) or delidding of an integrated circuit (IC) is the process of removing the protective cover or integrated heat spreader (IHS) of an integrated circuit so that the contained die is revealed for visual inspection of the micro circuitry imprinted on the die. This process is typically done in order to debug a manufacturing problem with the chip, or possibly to copy information from the device, to check for counterfeit chips or to reverse engineer it. Companies such as TechInsights and ChipRebel decap, take die shots of, and reverse engineer chips for customers. Modern integrated circuits can be encapsulated in plastic, ceramic, or epoxy packages.
Delidding may also be done to test the chip for radiation-tolerance with a heavy-ion beam or in an effort to reduce the operating temperatures of an integrated circuit such as a processor, by replacing the thermal interface material (TIM) between the die and the IHS with a higher-quality TIM. With care, it's possible to decap a device and still leave it functional.
Method
Decapping is usually carried out by chemical etching of the covering, laser cutting, laser evaporation of the covering, plasma etching or mechanical removal of the cover using a milling machine, saw blade, using hot air or by desoldering and cutting. The process can be either destructive or non-destructive of the internal die.
Chemical etching usually involves subjecting the IC package (if made of plastic) to concentrated or fuming nitric acid, heated concentrated sulfuric acid, white fuming nitric acid, or a mixture of these for some time, possibly while applying heat externally with a hot plate or hot air gun; these dissolve the package while leaving the die intact. The acids are dangerous, so protective equipment such as appropriate gloves, a full-face respirator with appropriate acid cartridges, a lab coat, and a fume hood are required.
Laser decapping scans a high power laser beam across the plastic IC package to vaporize it, while avoiding the actual silicon die.
In a common version of non-destructive, mechanical delidding, the IHS of an IC such as a computer processor is removed by using an oven to soften the solder (if present) between the IHS and the die(s), and using a knife to cut the adhesive around the periphery of the IHS that joins the IHS to the processor package substrate, a specialized printed circuit board often called simply a substrate or sometimes an interposer. In many processors the dies are also soldered to the IHS, which can still be removed by applying heat until the solder melts and removing the IHS while the solder is still liquid. The die(s) are mounted on the substrate using flip chip.
Gallery
See also
Die shot
Reverse engineering
Sample preparation equipment
References
Integrated circuits | Decapping | Technology,Engineering | 597 |
45,507,871 | https://en.wikipedia.org/wiki/Penicillium%20flavisclerotiatum | Penicillium flavisclerotiatum is a species of the genus of Penicillium which was isolated from soil of the Stellenbosch mountain in Fynbos in South Africa.
See also
List of Penicillium species
References
flavisclerotiatum
Fungi described in 2014
Fungus species | Penicillium flavisclerotiatum | Biology | 67 |
77,588,892 | https://en.wikipedia.org/wiki/NGC%207363 | NGC 7363 is a barred spiral galaxy in the constellation of Pegasus. Its velocity with respect to the cosmic microwave background is 6393 ± 24 km/s, which corresponds to a Hubble distance of 94.29 ± 6.61 Mpc (∼308 million light-years). It was discovered by German astronomer Heinrich d'Arrest on 27 August 1865.
One supernova has been observed in NGC 7363: SN 2023abdq (type II, mag. 18.69) was discovered by the Gaia Photometric Science Alerts on 22 December 2023.
NGC 7331 Group
According to A. M. Garcia, NGC 7363 is part of the five-member NGC 7331 group (also known as LGG 459). The other galaxies in the group are NGC 7320, NGC 7331, UGC 12082, and UGC 12060.
See also
List of NGC objects (7001–7840)
References
External links
7363
069580
+06-49-078
22409+3344
Pegasus (constellation)
Astronomical objects discovered in 1865
Discoveries by Heinrich Louis d'Arrest
Barred spiral galaxies | NGC 7363 | Astronomy | 241 |
44,847,815 | https://en.wikipedia.org/wiki/Newlight%20Technologies | Newlight Technologies is a company based in Huntington Beach, California, known for carbon sequestration into materials and products. The company is headquartered and manufactures in Huntington Beach, CA, and staffs over 200 employees.
History and corporate affairs
As of October 2020, Newlight Technologies has one facility located in Huntington Beach, California, which serves as its headquarters, R&D, operations, and manufacturing facility.
Technology
Currently, Newlight captures methane from a dairy farm in California. The methane is transported to a bioreactor. From there, the methane is mixed with air and interacts with enzymes to form a polymer trademarked as AirCarbon. According to Popular Science, the material performs similarly to most oil-based plastics but costs less to produce. AirCarbon has already been contracted for use in desk chairs, computer packaging, and smartphone cases. Newlight Technologies has also commercialized its own lines of carbon-negative eyewear and foodware, formerly known as Covalent and Restore.
Recognition
In 2014, AirCarbon was named Popular Science's Innovation of the Year, and in 2016, AirCarbon was awarded the Presidential Green Chemistry Challenge Award by the U.S. EPA.
References
Carbon capture and storage
Technology companies based in Greater Los Angeles
Companies based in Irvine, California
Renewable resource companies established in 2003
Technology companies established in 2003
2003 establishments in California
Methane
American companies established in 2003 | Newlight Technologies | Chemistry,Engineering | 278 |
38,676,345 | https://en.wikipedia.org/wiki/Frederick%20Mason%20Brewer | Frederick Mason Brewer CBE FRIC (1903 – 11 February 1963) was an English chemist. He was Head of the Inorganic Chemistry Laboratory at the University of Oxford and Mayor of Oxford during 1959–60.
Frederick Brewer was born in Kensal Rise (aka Kensal Green), Middlesex, England. He was the son of Frederick Charles Brewer and Ellen Maria Owen, both school teachers.
Brewer studied chemistry at Lincoln College, Oxford, from 1920, having received an open scholarship, and subsequently gained a first class degree.
After his undergraduate studies, Brewer undertook research with Prof. Frederick Soddy.
From 1925 to 1927, Brewer was a Commonwealth Fund Fellow at Cornell University in the United States. During 1927–8, he was a lecturer in physical chemistry at the University of Reading. In 1928, he became a demonstrator and lecturer at the University of Oxford Inorganic Chemistry Laboratory. He stayed in Oxford for the remainder of his life. He became attached to St Catherine's Society in the 1930s. In 1955, he was appointed Reader in Inorganic Chemistry. When St Catherine's Society became St Catherine's College in 1962, he was appointed a Fellow of the College.
In 1944, Brewer was elected as a university member on Oxford City Council. In 1959, he was elected Mayor of Oxford for 1959–60. In 1961, he was appointed an Alderman of the council.
Brewer lived at 6 Moreton Road in North Oxford. He was a Fellow of the Royal Institute of Chemistry and was awarded the honour of Commander of the Order of the British Empire (CBE) in 1963. However, a week after collecting his CBE at Buckingham Palace, at the age of 60, he died at the Radcliffe Infirmary in Oxford. He was married with a son and a daughter.
References
1903 births
1963 deaths
People from Kensal Green
Alumni of Lincoln College, Oxford
Cornell University fellows
Academics of the University of Reading
Fellows of St Catherine's College, Oxford
English chemists
Inorganic chemists
Mayors of Oxford
Commanders of the Order of the British Empire
Fellows of the Royal Institute of Chemistry | Frederick Mason Brewer | Chemistry | 412 |
37,726 | https://en.wikipedia.org/wiki/Octave | In music, an octave (: eighth) or perfect octave (sometimes called the diapason) is a series of eight notes occupying the interval between (and including) two notes, one having twice the frequency of vibration of the other. The octave relationship is a natural phenomenon that has been referred to as the "basic miracle of music", the use of which is "common in most musical systems". The interval between the first and second harmonics of the harmonic series is an octave. In Western music notation, notes separated by an octave (or multiple octaves) have the same name and are of the same pitch class.
To emphasize that it is one of the perfect intervals (including unison, perfect fourth, and perfect fifth), the octave is designated P8. Other interval qualities are also possible, though rare. The octave above or below an indicated note is sometimes abbreviated 8a or 8va (), 8va bassa (, sometimes also 8vb), or simply 8 for the octave in the direction indicated by placing this mark above or below the staff.
Explanation and definition
An octave is the interval between one musical pitch and another with double or half its frequency. For example, if one note has a frequency of 440 Hz, the note one octave above is at 880 Hz, and the note one octave below is at 220 Hz. The ratio of frequencies of two notes an octave apart is therefore 2:1. Further octaves of a note occur at 2^n times the frequency of that note (where n is an integer), such as 2, 4, 8, 16, etc. and the reciprocal of that series. For example, 55 Hz and 440 Hz are one and two octaves away from 110 Hz because they are 1/2 (or 2^−1) and 4 (or 2^2) times the frequency, respectively.
The number of octaves between two frequencies is given by the formula:

$$\text{Number of octaves} = \log_2\left(\frac{f_2}{f_1}\right)$$
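In code this is a one-liner; the sketch below reproduces the octave distances for the frequencies mentioned above:

```python
# Octave distance between two frequencies: log base 2 of their ratio.
import math

def octaves_between(f1, f2):
    return math.log2(f2 / f1)

print(octaves_between(110, 440))   # -> 2.0 (two octaves up)
print(octaves_between(440, 220))   # -> -1.0 (one octave down)
```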
Music theory
Most musical scales are written so that they begin and end on notes that are an octave apart. For example, the C major scale is typically written C–D–E–F–G–A–B–C, the initial and final Cs being an octave apart.
Because of octave equivalence, notes in a chord that are one or more octaves apart are said to be doubled (even if there are more than two notes in different octaves) in the chord. The word is also used to describe melodies played in parallel one or more octaves apart (see example under Equivalence, below).
While octaves commonly refer to the perfect octave (P8), the interval of an octave in music theory encompasses chromatic alterations within the pitch class, meaning that G to G♯ (13 semitones higher) is an augmented octave (A8), and G to G♭ (11 semitones higher) is a diminished octave (d8). The use of such intervals is rare, as there is frequently a preferable enharmonically-equivalent notation available (minor ninth and major seventh respectively), but these categories of octaves must be acknowledged in any full understanding of the role and meaning of octaves more generally in music.
Notation
Octave of a pitch
Octaves are identified with various naming systems. Among the most common are the scientific, Helmholtz, organ pipe, and MIDI note systems. In scientific pitch notation, a specific octave is indicated by a numerical subscript number after note name. In this notation, middle C is C4, because of the note's position as the fourth C key on a standard 88-key piano keyboard, while the C an octave higher is C5.
The table below aligns these systems, using the C that begins each octave:

Scientific   C−1         C0          C1       C2      C3      C4      C5      C6      C7      C8       C9
Helmholtz    C,,,        C,,         C,       C       c       c'      c''     c'''    c''''   c'''''   c''''''
Organ        64 Foot     32 Foot     16 Foot  8 Foot  4 Foot  2 Foot  1 Foot  3 Line  4 Line  5 Line   6 Line
Name         Dbl Contra  Sub Contra  Contra   Great   Small   1 Line  2 Line  3 Line  4 Line  5 Line   6 Line
MIDI Note    0           12          24       36      48      60      72      84      96      108      120
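As an illustrative sketch (not part of the article; the function name is hypothetical), the MIDI numbers in the table map to scientific pitch notation in Python, since MIDI note 0 corresponds to C−1 and each octave spans 12 notes:

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_scientific(midi_note):
    # MIDI note 0 is C-1, so the octave number is the note's 12-block minus one
    octave = midi_note // 12 - 1
    # The pitch class (position within the octave) is the remainder modulo 12
    return NOTE_NAMES[midi_note % 12] + str(octave)

print(midi_to_scientific(60))   # C4 (middle C)
print(midi_to_scientific(120))  # C9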
Ottava alta and bassa
The notation 8a or 8va is sometimes seen in sheet music, meaning "play this an octave higher than written" (all' ottava: "at the octave" or all' 8va). 8a or 8va stands for ottava, the Italian word for octave (or "eighth"); the octave above may be specified as ottava alta or ottava sopra. Sometimes 8va is used to tell the musician to play a passage an octave lower (when placed under rather than over the staff), though the similar notation 8vb (ottava bassa or ottava sotto) is also used. Similarly, 15ma (quindicesima) means "play two octaves higher than written" and 15mb (quindicesima bassa) means "play two octaves lower than written."
The abbreviations col 8, coll' 8, and c. 8va stand for coll'ottava, meaning "with the octave", i.e. to play the notes in the passage together with the notes in the notated octaves. Any of these directions can be cancelled with the word loco, but often a dashed line or bracket indicates the extent of the music affected.
Equivalence
After the unison, the octave is the simplest interval in music. The human ear tends to hear both notes as being essentially "the same", due to closely related harmonics. Notes separated by an octave "ring" together, adding a pleasing sound to music. The interval is so natural to humans that when men and women are asked to sing in unison, they typically sing in octaves.
For this reason, notes an octave apart are given the same note name in the Western system of music notation—the name of a note an octave above A is also A. This is called octave equivalence, the assumption that pitches one or more octaves apart are musically equivalent in many ways, leading to the convention "that scales are uniquely defined by specifying the intervals within an octave". The conceptualization of pitch as having two dimensions, pitch height (absolute frequency) and pitch class (relative position within the octave), inherently includes octave circularity. Thus all Cs (or all 1s, if C = 0), any number of octaves apart, are part of the same pitch class.
Octave equivalence is a part of most advanced musical cultures, but is far from universal in "primitive" and early music. The languages in which the oldest extant written documents on tuning are written, Sumerian and Akkadian, have no known word for "octave". However, a set of cuneiform tablets collectively describing the tuning of a nine-stringed instrument, believed to be a Babylonian lyre, describes tunings for seven of the strings, with indications to tune the remaining two strings an octave from two of the seven tuned strings. Leon Crickmore recently proposed that "The octave may not have been thought of as a unit in its own right, but rather by analogy like the first day of a new seven-day week".
Monkeys experience octave equivalence, and its biological basis apparently is an octave mapping of neurons in the auditory thalamus of the mammalian brain. Studies have also shown the perception of octave equivalence in rats, human infants, and musicians but not starlings, 4–9-year-old children, or non-musicians.
See also
One-third octave
References
Sources
External links
Anatomy of an Octave by Kyle Gann
Perfect intervals
0002:0001
Musical notes
Units of level | Octave | Physics,Mathematics | 2,383 |
7,757,190 | https://en.wikipedia.org/wiki/Food%20loss%20and%20waste | The causes of food going uneaten are numerous and occur throughout the food system, during production, processing, distribution, retail and food service sales, and consumption. Overall, about one-third of the world's food is thrown away. A similar amount is lost on top of that by feeding human-edible food to farm animals (the net effect wastes an estimated 1144 kcal/person/day). A 2021 meta-analysis by the United Nations Environment Programme, which did not include food lost during production, found that food waste was a challenge in all countries at all levels of economic development. The analysis estimated that global food waste was 931 million tonnes (about 121 kg per capita) across three sectors: 61 percent from households, 26 percent from food service and 13 percent from retail.
Food loss and waste is a major part of the impact of agriculture on climate change (it amounts to 3.3 billion tons of CO2e emissions annually) and other environmental issues, such as land use, water use and loss of biodiversity. Prevention of food waste is the highest priority, and when prevention is not possible, the food waste hierarchy ranks the food waste treatment options from preferred to least preferred based on their negative environmental impacts. Reuse pathways for surplus food intended for human consumption, such as food donation, are the next best strategy after prevention, followed by animal feed and recycling of nutrients and energy, and finally the least preferred option, landfill, which is a major source of the greenhouse gas methane. Other considerations include unreclaimed phosphorus in food waste leading to further phosphate mining. Moreover, reducing food waste in all parts of the food system is an important part of reducing the environmental impact of agriculture, by reducing the total amount of water, land, and other resources used.
The UN's Sustainable Development Goal Target 12.3 seeks to "halve global per capita food waste at the retail and consumer levels and reduce food losses along production and supply chains, including post-harvest losses" by 2030. Climate change mitigation strategies prominently feature reducing food waste. At the 2022 United Nations Biodiversity Conference, nations agreed to reduce food waste by 50% by the year 2030.
Definition
Food loss and waste occurs at all stages of the food supply chain – production, processing, sales, and consumption. Definitions vary as to what constitutes food loss versus food waste, and as to whether parts of foods that exit the food supply chain (e.g., inedible parts) are considered lost or wasted. Terms are often defined on a situational basis (as is the case more generally with definitions of waste). Professional bodies, including international organizations, state governments, and secretariats may use their own definitions.
United Nations
The Food and Agriculture Organization (FAO) of the United Nations defines food loss and waste as the decrease in quantity or quality of food along the food supply chain. Within this framework, UN Agencies distinguish loss and waste at two different stages in the process:
Food loss occurs along the food supply chain from harvest/slaughter/catch up to, but not including, the sales level
Food waste occurs at the retail and consumption level.
Important components of this definition include:
Food redirected to nonfood chains (including animal feed, compost, or recovery to bioenergy) is not counted as food loss or waste.
Inedible parts are not considered as food loss or waste (these inedible parts are sometimes referred to as unavoidable food waste).
Under Sustainable Development Goal 12, the Food and Agriculture Organization is responsible for measuring food loss, while the UN Environmental Program measures food waste.
The 2024 UNEP Food Waste Index Report, "Think Eat Save: Tracking Progress to Halve Global Food Waste," addresses the severe issue of food waste, which accounts for US$1 trillion in losses, 8–10% of global greenhouse emissions, and the unnecessary use of 30% of the world's agricultural land, exacerbating hunger and affecting child growth. In alignment with SDG 12.3, the report compiles 194 data points from 93 countries to illustrate the widespread nature of food waste, notes that waste levels are similar across nations of varying income levels, and underscores the leadership of Japan and the UK among G20 nations in data tracking. It argues for a comprehensive definition of food waste that includes both edible and inedible parts, and calls for improved data collection, particularly in the retail and food service sectors of low-income countries, to support global efforts to halve food waste by 2030, with an upcoming focus on public-private partnerships as a key strategy.
European Union
In the European Union (EU), food waste is defined by combining the definitions of food and waste, namely: "any substance or product, whether processed, partially processed or unprocessed, intended to be, or reasonably expected to be ingested by humans (...)" (including things such as drinks and chewing gum; excluding things such as feed, medicine, cosmetics, tobacco products, and narcotic or psychotropic substances) "which the holder discards or intends or is required to discard".
Previously, food waste was defined by directive 75/442/EEC as "any food substance, raw or cooked, which is discarded, or intended or required to be discarded" in 1975. In 2006, 75/442/EEC was repealed by 2006/12/EC, which defined waste as "any substance or object in the categories set out in Annex I which the holder discards or intends or is required to discard". Meanwhile, Article 2 of Regulation (EC) No. 178/2002 (the General Food Law Regulation), as amended on 1 July 2022, defined food as "any substance or product, whether processed, partially processed or unprocessed, intended to be, or reasonably expected to be ingested by humans (...)", including things such as drinks and chewing gum, excluding things such as feed, medicine, cosmetics, tobacco products, and narcotic or psychotropic substances.
A 2016 European Court of Auditors special report had criticised the lack of a common definition of food waste as hampering progress, and a May 2017 resolution by the European Parliament supported a legally binding definition of food waste. Finally, the 2018/851/EU directive of 30 May 2018 (the revised Waste Framework Directive) combined the two (after waste was redefined in 2008 by Article 3.1 of 2008/98/EC as "any substance or object which the holder discards or intends or is required to discard") by defining food waste as "all food as defined in Article 2 of Regulation (EC) No 178/2002 of the European Parliament and of the Council that has become waste."
United States
As of 2022, the United States Environmental Protection Agency (EPA) employed three categories:
"Excess food refers to food that is recovered and donated to feed people."
"Food waste refers to food such as plate waste (i.e., food that has been served but not eaten), spoiled food, or peels and rinds considered inedible that is sent to feed animals, to be composted or anaerobically digested, or to be landfilled or combusted with energy recovery."
"Food loss refers to unused product from the agricultural sector, such as unharvested crops."
In 2006, the EPA defined food waste as "uneaten food and food preparation wastes from residences and commercial establishments such as grocery stores, restaurants, produce stands, institutional cafeterias and kitchens, and industrial sources like employee lunchrooms".
The states remain free to define food waste differently for their purposes, though as of 2009, many had not done so.
Other definitions
Bellemare et al. (2017) compared four definitions from:
a Food and Agriculture Organization (FAO) 2016 report: "Food loss is defined as 'the decrease in quantity or quality of food.' Food waste is part of food loss and refers to discarding or alternative (nonfood) use of food that is safe and nutritious for human consumption along the entire food supply chain, from primary production to end household consumer level";
an Economic Research Service (ERS; a USDA agency) 2014 report: "Food loss represents the amount of food postharvest, that is available for human consumption but is not consumed for any reason. It includes cooking loss and natural shrinkage (for example, moisture loss); loss from mould, pests, or inadequate climate control; and food waste. Food waste is a component of food loss and occurs when an edible item goes unconsumed, as in food discarded by retailers due to color or appearance, and plate waste by consumers";
a FUSIONS (an EU project) 2016 report: "Food waste is any food, and inedible parts of food, removed from the food supply chain to be recovered or disposed (including composed [sic], crops ploughed in/not harvested, anaerobic digestion, bioenergy production, co-generation, incineration, disposal to sewer, landfill or discarded to sea)"; and
an EPA 2016 report: "The amount of food going to landfills from residences, commercial establishments (e.g., grocery stores and restaurants), institutional sources (e.g., school cafeterias), and industrial sources (e.g., factory lunchrooms). Pre-consumer food generated during the manufacturing and packaging of food products is not included in EPA's food waste estimates."
According to Bellemare et al., the inclusion of food that goes to nonfood productive use is flawed for two reasons: "First, if recovered food is used as an input, such as animal feed, fertilizer, or biomass to produce output, then by definition it is not wasted. However, there might be economic losses if the cost of recovered food is higher than the average cost of inputs in the alternative, nonfood use. Second, the definition creates practical problems for measuring food waste because the measurement requires tracking food loss in every stage of the supply chain and its proportion that flows to nonfood uses." They argued that only food that ends up in landfills should be counted as food waste, pointing to the 2016 EPA definition as a good example. Bellemare et al. also noted that "the FAO and ERS definitions only apply to edible and safe and nutritious food, whereas the definitions of FUSIONS and the EPA apply to both edible and inedible parts of food. Finally, the ERS and EPA definitions of food waste exclude the food that is not harvested at the farm level."
A 2019 FAO report stated:
Methodology
The 2019 FAO report stated: "Food loss and waste has typically been measured in physical terms using tonnes as reporting units. This measurement fails to account for the economic value of different commodities and can risk attributing a higher weight to low-value products just because they are heavier. [This] report acknowledges this by adopting a measure that accounts for the economic value of produce." Hall et al. (2009) calculated food waste in the United States in terms of energy value "by comparing the US food supply data with the calculated food consumed by the US population." The result was that food waste among American consumers increased from "about 30% of the available food supply in 1974 to almost 40% in recent years" (the early 2000s), or about 900 kcal per person per day (1974) to about 1400 kcal per person per day (2003). A 2012 Natural Resources Defense Council report interpreted this to mean that Americans threw away up to 40% of food that was safe to eat. Buzby & Hyman (2012) estimated both the total weight (in kg and lbs) and monetary value (in USD) of food loss in the United States, concluding that "the annual value of food loss is almost 10% of the average amount spent on food per consumer in 2008".
Net Animal Losses
Net animal losses are the difference between the calories in human-edible crops fed to animals and the calories returned in meat, dairy and fish. These losses are higher than all conventional food losses combined, because livestock eat more human-edible food than their products provide. Research estimated that if the US population ate human-edible crops directly instead of feeding them to animals to produce meat, dairy and eggs, enough food would be freed up for an additional 350 million people. At a global level, livestock is fed an average of 1738 kcal/person/day of human-edible food, and just 594 kcal/p/d of animal products return to the human food supply, a net loss of 66%.
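A worked check of these figures (an illustrative calculation only, using the numbers quoted above):

fed_to_livestock = 1738  # kcal/person/day of human-edible food fed to animals
returned_as_food = 594   # kcal/person/day returned as meat, dairy and eggs

net_loss = fed_to_livestock - returned_as_food
print(net_loss)                                  # 1144 kcal/person/day
print(round(net_loss / fed_to_livestock * 100))  # 66 (percent net loss)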
Sources
Production
In the United States, food loss can occur at most stages of the food industry and in significant amounts. In subsistence agriculture, the amounts of food loss are unknown, but are likely to be insignificant by comparison, due to the limited stages at which loss can occur, and given that food is grown for projected need as opposed to a global marketplace demand. Nevertheless, on-farm losses in storage in developing countries, particularly in African countries, can be high although the exact nature of such losses is much debated.
In the food industry of the United States, whose food supply is the most diverse and abundant of any country in the world, loss occurs from the beginning of the food production chain. From planting, crops can be subjected to pest infestations and severe weather, which cause losses before harvest. Since natural forces (e.g. temperature and precipitation) remain the primary drivers of crop growth, losses from these can be experienced by all forms of outdoor agriculture. On average, farms in the United States lose up to six billion pounds of crops every year because of these unpredictable conditions. According to the IPCC sixth assessment report, encouraging the development of technologies that address issues in food harvesting and post-harvesting could have a significant impact on decreasing food waste early in the supply chain.
The use of machinery in harvesting can cause losses, as harvesters may be unable to discern between ripe and immature crops, or collect only part of a crop. Economic factors, such as regulations and standards for quality and appearance, also cause food waste; farmers often harvest selectively, preferring to leave crops "not to standards" in the field (where they can still be used as fertilizer or animal feed, or recovered through field gleaning), since they would otherwise be discarded later. This method of removing undesirable produce from harvest collection, distribution sites and grocery stores is called culling. However, when culling occurs at the production, food processing, retail and consumption stages, it is usually to remove or dispose of produce with a strange or imperfect appearance rather than produce that is spoiled or unsafe to eat. In urban areas, fruit and nut trees often go unharvested because people either do not realize that the fruit is edible or they fear that it is contaminated, despite research which shows that urban fruit is safe to consume.
Food processing
Food loss continues in the post-harvest stage, but the amounts of post-harvest loss involved are relatively unknown and difficult to estimate. Regardless, the variety of factors that contribute to food loss, both biological/environmental and socio-economical, would limit the usefulness and reliability of general figures. In storage, considerable quantitative losses can be attributed to pests and micro-organisms. This is a particular problem for countries that experience a combination of heat (around 30 °C) and ambient humidity (between 70 and 90 per cent), as such conditions encourage the reproduction of insect pests and micro-organisms. Losses in the nutritional value, caloric value and edibility of crops, by extremes of temperature, humidity or the action of micro-organisms, also account for food waste. Further losses are generated in the handling of food and by shrinkage in weight or volume.
Some of the food loss produced by processing can be difficult to reduce without affecting the quality of the finished product. Food safety regulations are able to claim foods that contradict standards before they reach markets. Although this can conflict with efforts to reuse food loss (such as in animal feed), safety regulations are in place to ensure the health of the consumer; they are vitally important, especially in the processing of foodstuffs of animal origin (e.g. meat and dairy products), as contaminated products from these sources can lead to and are associated with microbiological and chemical hazards.
Retail
Packaging protects food from damage during its transportation from farms and factories via warehouses to retailing, as well as preserving its freshness upon arrival. Although it avoids considerable food waste, packaging can compromise efforts to reduce food waste in other ways, such as by contaminating waste that could be used for animal feedstocks with plastics.
In 2013, the nonprofit Natural Resources Defense Council (NRDC) performed research suggesting that the leading cause of food waste in America is uncertainty over food expiration dates, such as confusion in deciphering best-before, sell-by, or use-by dates. Joined by Harvard's Food Law and Policy Clinic, the NRDC produced a study called The Dating Game: How Confusing Food Date Labels Lead to Food Waste in America. This United States-based study looked at the intertwining laws which lead labeling to end up unclear and erratic. This uncertainty leads consumers to toss food, most often because they think the food may be unsafe or misunderstand the labeling on the food completely. Lack of regulation on labeling can result in large quantities of food being removed from the market overall.
Retail stores throw away large quantities of food. Usually, this consists of items that have reached either their best-before, sell-by, or use-by dates. Some stores make an effort to mark down these goods with systems like discount stickers, but stores have widely varying policies for handling such foods. Much of the food discarded by stores is still edible. Some stores put effort into preventing access by poor or homeless people, while others work with charitable organizations to distribute food. Retailers also contribute to waste as a result of their contractual arrangements with suppliers. Failure to supply agreed quantities renders farmers or processors liable to have their contracts cancelled. As a consequence, they plan to produce more than actually required to meet the contract, to have a margin of error. Surplus production is often simply disposed of.
Retailers usually have strict cosmetic standards for produce, and if fruits or vegetables are misshapen or superficially bruised, they are often not put on the shelf. In the United States, some of the estimated six billion pounds of produce wasted each year are discarded because of appearance. The USDA publishes guidelines used as a baseline assessment by produce distributors, grocery stores, restaurants and other consumers in order to rate the quality of food. These guidelines and how they rate are readily available on their website. For example, apples are graded by their size, color, wax residue, firmness, and skin appearance. If apples rank highly in these categories and show close to no superficial defects, they are rated as "U.S. Extra Fancy" or "U.S. Fancy"; these are the typical ratings sought out by grocery stores when purchasing their produce. Any apples with suboptimal appearance are ranked as either "U.S. Number 1" or "Utility" and are not normally purchased for retail, as recommended by produce marketing sources, despite being safe and edible. A number of regional programs and organizations have been established by the EPA and USDA in an attempt to reduce such produce waste. Organizations in other countries, such as Good & Fugly in Australia and No Food Waste in India, are making similar efforts worldwide. The popular trend of selling "imperfect" produce at retail has been criticized for overlooking existing markets for these foods (e.g. the food processing industry and bargain grocery stores) and downplaying the household-level wasting of food that is statistically a larger part of the overall problem.
The fishing industry wastes substantial amounts of food: about 40–60% of fish caught in Europe is discarded as the wrong size or wrong species.
This comes to about 2.3 million tonnes per annum in the North Atlantic and the North Sea.
Food-service industry
Addressing food waste requires involving multiple stakeholders throughout the food supply chain, which is a market-driven system. Each stakeholder, and how its food waste is quantified, can depend on geographical scale; this in turn produces different definitions of food waste, as mentioned earlier, reflecting the complexities of food supply chains, and creates a need for research focused on key stakeholders. The food service industry appears to be a key stakeholder in achieving mitigation. The key players within the food service industry include the manufacturers, producers, farmers, managers, employees, and consumers. The key factors relating to food waste in restaurants include the food menu, the production procedure, the use of pre-prepared versus whole food products, dinnerware size, type of ingredients used, the dishes served, opening hours, and disposal methods. These factors can be categorized into the different stages of operations: pre-kitchen, kitchen-based, and post-kitchen processes.
In restaurants in developing countries, the lack of infrastructure and associated technical and managerial skills in food production have been identified as the key drivers of food waste, both currently and in the future. Comparatively, the majority of food waste in developed countries tends to be produced post-consumer, driven by the low prices of food, greater disposable income, consumers' high expectations of food cosmetic standards, and the increasing disconnect between consumers and how food is produced (urbanization). That being said, in United States restaurants alone, an estimated 22 to 33 billion pounds of food are wasted each year.
Serving plate size reduction has been identified as an intervention effective at reducing restaurant food waste. Under such interventions, restaurants decrease the size of plates for meals provided to diners. Similar interventions which have been found to be effective at reducing restaurant food waste include utilizing reusable rather than disposable plates and decreasing serving size.
Food and agricultural nonprofits
Food and agriculture nonprofits (FANOs) are an understudied player in food system sustainability and food waste management. FANOs play an essential role at every step of the food supply chain, including in creating or preventing food waste. Food waste can be defined as edible food discarded by consumers. In FANOs, when food safety practices are not employed, it can lead to food waste. Reducing food waste is a priority in many FANOs. Still, due to an absence of food safety processes being implemented and a lack of food safety regulations, food waste is prevalent and compounded. Well-intentioned nonprofit staff and volunteers work with insufficient knowledge of how to safely handle and store food to prevent spoilage. FANOs have limited resources, like volunteer time and sporadic donations, and may not have the capacity to decipher complex, contradictory food safety regulations. However, FANOs play a vital role in getting nutritious food to needy, hungry people and families, so these nonprofits are responsible for being good stewards of their food stores to prevent waste and protect their clients' health by distributing safe food. Thus, despite limited resources, FANOs should focus on volunteer training. Furthermore, nonprofit and food scientists can play an essential role in supporting FANOs through joint volunteer training design and evaluation.
Consumption
Consumers are directly and indirectly responsible for wasting a lot of food, much of which could be avoided if they were willing to accept suboptimal food (SOF) that deviates in sensory characteristics (odd shapes, discolorations) or has a best-before date that is approaching or has passed, but is still perfectly fine to eat. In addition to inedible and edible food waste generated by consumers, substantial amounts of food are wasted through food overconsumption, also referred to as metabolic food waste, estimated globally as 10% of foods reaching the consumer. Several interventions have been designed to achieve food waste reduction at the consumer level, such as reducing portion size and changing plates. However, despite being practical to some extent, these interventions can result in unintended consequences due to a lack of understanding of the underlying causes and of what influences consumers to act on specific behaviors. Unintended consequences could include, for example, prioritizing unhealthy food at the expense of healthy food, or reduced consumption and calorie intake in general.
By sector
Fruit and vegetables
Grains
Fishing
In 2011, FAO estimated that up to 35 percent of global fisheries and aquaculture production is either lost or wasted every year.
Extent
Global extent
Efforts are underway by the Food and Agriculture Organization (FAO) and the United Nations Environment Programme (UNEP) to measure progress towards SDG Target 12.3 through two separate indices: the Food Loss Index (FLI) and the Food Waste Index (FWI).
According to FAO's The State of Food and Agriculture 2019, globally, in 2016, around 14 percent of the world's food was lost from production before reaching the retail level. Generally, levels of loss are higher for fruits and vegetables than for cereals and pulses. However, even for the latter, significant levels are found in sub-Saharan Africa and Eastern and South-Eastern Asia, while they are limited in Central and Southern Asia.
Estimates from UN Environment's Food Waste Index suggest that about 931 million tonnes of food, or 17 percent of total food available to consumers in 2019, went into the waste bins of households, retailers, restaurants and other food services.
According to a report from Feedback EU, the EU wastes 153 million tonnes of food each year, around double previous estimates.
Earlier estimates
In 2011, an FAO publication based on studies carried out by the Swedish Institute for Food and Biotechnology (SIK) found that the total global amount of food loss and waste was around one third of the edible parts of food produced for human consumption, amounting to about 1.3 billion tonnes per year. Industrialized and developing countries differ substantially: in developing countries, it is estimated that 400–500 calories per day per person are wasted, while in developed countries 1,500 calories per day per person are wasted. In the former, more than 40% of losses occur at the post-harvest and processing stages, while in the latter, more than 40% of losses occur at the retail and consumer levels. The total food waste by consumers in industrialized countries (about 222 million tonnes) is almost equal to the entire net food production of sub-Saharan Africa (about 230 million tonnes).
A 2013 report from the British Institution of Mechanical Engineers (IME) likewise estimated that 30–50% (or 1.2–2 billion tonnes) of all food produced remains uneaten.
Individual countries
Australia
Each year in New South Wales, more than 25 million meals are delivered by the charity OzHarvest from food that would otherwise be wasted. Each year, the Australian economy loses $20 billion through food waste. This has a significant environmental impact through the waste of resources used to produce, manufacture, package, and distribute that food.
In addition, it is estimated that 7.6 million tonnes of CO2 is generated by disposed food in landfills, which is also a cause of odour, leaching, and potential generation of disease. In March 2019, the Australian ministry of the environment shared the key findings of Australia's National Food Waste Baseline, which will facilitate the tracking of progress towards the goal of halving Australian food waste by 2030.
The Australian government has taken many initiatives to help achieve this goal. It invested $1.2 million in organizations that use renewable energy systems to store and transport food, and funded more than $10 million of research on food waste reduction. Local governments have also implemented programs such as information sessions on food storage and composting, diversion of waste from restaurants and cafes away from landfills to shared recycling facilities, and donation of food that would otherwise be wasted to organizations.
Canada
In Canada, 58% of all food is wasted, amounting to 35.5 million tonnes of food per annum. The value of this lost food is equivalent to CA$21 billion. Such quantities of food would be enough to feed all Canadians for five months. It is estimated that about one-third of this waste could be spared and sent to those in need. There are many factors that contribute to such large-scale waste. Manufacturing and processing food alone incur costs of CA$21 billion, or 4.82 million tons. Per household, it is estimated that $1,766 is lost in food loss and waste. The Government of Canada identifies three main factors contributing to household waste: (1) buying too much food and not eating it before it spoils, (2) malfunctioning or poorly-designed packaging that does not deter spoilage or contamination, and (3) improper disposal of food – using garbage bins instead of those intended for organic waste.
Canada, Mexico, and the United States are working together under the Commission for Environmental Cooperation in order to address the severe problem of food waste in North America.
Canada specifically is working in the following ways to reduce food waste:
Canada pledged to consult on strategies in the Strategy on Short-lived Climate Pollutants to reduce avoidable food waste within the country. This will help to reduce methane emissions from Canadian landfills.
The government has implemented a Food Policy for Canada, which is a movement towards a more sustainable food system.
In February 2019, the government brought together several experts from different sectors to share ideas and discuss opportunities for measuring and reducing food loss and waste across the food supply chain.
During the 2022 Quebec general election, Québec solidaire party spokesman Gabriel Nadeau-Dubois stated that ending food waste in Quebec would be a priority of the party if they were in government. The party seeks to cut food waste by 50% by mandating large businesses and institutions to give unsold food to groups that would distribute the food, or to businesses that would process the food.
China
In 2015 the Chinese Academy of Sciences reported that in big cities there were 17 to 18 million tons of food waste, enough to feed over 30 million people. About 25% of the waste was staple foods and about 18% was meat.
In August 2020 the Chinese Communist Party general secretary Xi Jinping said the amount of food waste was shocking and distressing. A local authority campaign, "Operation Empty Plate", was started to reduce waste, including encouraging food outlets to limit orders to one fewer main dish than the number of customers.
As of December 2020, a draft law is under consideration to penalise food outlets that encourage or mislead customers into ordering excessive meals causing obvious waste, first with a warning and then fines of up to 10,000 yuan. It would allow restaurants to charge customers who leave excessive leftovers. Broadcasters – radio, TV, or online – that produce, publish or disseminate content promoting overeating or food waste could also be fined up to 100,000 yuan.
Denmark
According to the Ministry of Environment (Denmark), over 700,000 tonnes of food is wasted every year in Denmark across the entire food value chain from farm to fork. Due to the work of activist Selina Juul's Stop Wasting Food movement, Denmark achieved a national reduction in food waste of 25% in 5 years (2010–2015).
France
In France, approximately 1.3–1.9 million tonnes of food waste is produced every year, or between 20 and 30 kilograms per person per year. Out of the 10 million tonnes of food that is either lost or wasted in the country, 7.1 million tonnes are wasted; of that, only 11% comes from supermarkets. Not only does this cost the French €16 billion per year, but the impact on the environment is also significant. In France, food waste emits 15.3 million tonnes of CO2, which represents 3% of the country's total CO2 emissions. In response to this issue, in 2016, France became the first country in the world to pass unanimous legislation banning supermarkets from throwing away or destroying unsold food. Instead, supermarkets are expected to donate such food to charities and food banks. In addition to donating food, many businesses claim to prevent food waste by selling soon-to-be-wasted products at discounted prices. The National Pact Against Food Waste in France has outlined eleven measures to achieve a reduction of food waste by half by 2025.
Hungary
According to the research of the Hungarian national food waste prevention programme, Project Wasteless, hosted by the National Food Chain Safety Office, an average Hungarian consumer generated 68 kg of food waste annually in 2016, and 49% of this amount could have been prevented (avoidable food waste). The research team replicated the study in 2019; according to the second measurement, food waste generated by Hungarian households was estimated at 65.5 kg per capita annually. Between the two periods, a 4% decrease was observed, despite significant economic expansion, likely due to the very intense media campaign of Project Wasteless. Covid-19 significantly affected the food waste behaviour of Hungarians: while total food waste remained essentially unchanged, the edible (avoidable) and inedible (unavoidable) fractions showed a particular transformation. As people spent more time at home, discarded leftovers were reduced, resulting in a drop from 32 to 25 kg/capita/year in avoidable food waste, while home cooking became more prevalent, contributing to a significant rise in unavoidable food waste from 31 to 36 kg/capita/year. The last measurement, in 2022, reports 59.9 kg/capita/year of food waste production in households, of which the avoidable part is 24 kg (40%). This indicates a reduction of 12% in total food waste and of 27% in avoidable food waste since the first measurement in 2016.
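A quick verification of the quoted reductions (illustrative only, using the figures above; the 2016 avoidable amount is taken as 49% of 68 kg, roughly 33 kg):

total = {2016: 68.0, 2019: 65.5, 2022: 59.9}  # kg/capita/year, household food waste
avoidable = {2016: 33.0, 2022: 24.0}          # kg/capita/year, avoidable fraction

def pct_drop(start, end):
    # Percentage reduction from start to end, rounded to a whole percent
    return round((start - end) / start * 100)

print(pct_drop(total[2016], total[2019]))          # 4  (percent)
print(pct_drop(total[2016], total[2022]))          # 12 (percent)
print(pct_drop(avoidable[2016], avoidable[2022]))  # 27 (percent)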
In 2021, The Hungarian Parliament passed a law dealing with food waste.
Italy
According to the REDUCE project, which produced the first baseline dataset for Italy based on the official EU methodological framework, food waste is 530 g per person per week at the household stage (edible fraction only); food waste in school canteens corresponds to 586 g per pupil per week; and retail food waste corresponds to 2.9 kg per capita per year.
Netherlands
According to Meeusen & Hagelaar (2008), between 30% and 50% of all food produced was estimated to be lost or thrown away at that time in the Netherlands, while a 2010 Agriculture Ministry (LNV) report stated that the Dutch population wasted 'at least 9.5m tonnes of food per year, worth at least €4.4bn.' In 2019, three studies into food waste in households in the Netherlands commissioned by the LNV were conducted, showing that the average household waste per capita had been reduced from 48 kilograms of "solid food (including dairy products, fats, sauces and soups)" in 2010, to 41.2 kilograms in 2016, to 34.3 kilograms in 2019. The waste of liquid foods (excluding beer and wine, first measured in 2019) that ended up in the sewer through sinks or toilets was analysed to have decreased from 57.3 litres per capita in 2010 to 45.5 litres in 2019.
New Zealand
Research done on household food waste in New Zealand found that larger households and households with more young people created more food waste. The average household in this case study put 40% of food waste into the rubbish.
Singapore
In Singapore, about 788,600 tonnes of food was wasted in 2014, of which roughly 13 percent was recycled. Since Singapore has limited agricultural capacity, the country spent about S$14.8 billion (US$10.6 billion) on importing food in 2014. US$1.4 billion of it, or 13 percent, ends up being wasted.
On January 1, 2020, Singapore implemented the Zero Waste Masterplan, which aims to reduce Singapore's daily waste production by 30 percent. The project also aims to extend the lifespan of the Semakau Landfill, Singapore's only landfill, beyond 2025. As a direct result of the project, food waste dropped to 665,000 tonnes, a significant decrease from 2017's all-time high of 810,000 tonnes.
United Kingdom
In the UK, it was stated in 2007 that 6.7 million tonnes per year of wasted food (purchased and edible food which is discarded) amounted to a cost of £10.2 billion. This represented costs of £250 to £400 a year per household.
United States
According to the United States Department of Agriculture (USDA), between 30 and 40 percent of food in the U.S. is wasted. Estimates of food waste in the United States range from 35 million tons to 103 million tons. In a study done by National Geographic in 2014, Elizabeth Royte indicated that more than 30 percent of food in the United States, valued at $162 billion annually, is not eaten. The University of Arizona conducted a study in 2004 which indicated that 14% to 15% of United States edible food is untouched or unopened, amounting to $43 billion worth of discarded, but edible, food. In 2010, the United States Department of Agriculture released estimates from the Economic Research Service approximating food waste in the United States at the equivalent of 141 trillion calories.
USDA data from 2010 shows that 26% of fish, meat, and poultry were thrown away at the retail and consumer level. Since then, meat production has increased by more than 10%. Data scientist Harish Sethu says this means that billions of animals are raised and slaughtered only to end up in a landfill.
Impact on the environment
According to the United Nations, about a third of all human-caused greenhouse gas emissions are linked to food. Empirical evidence at the global level on the environmental footprints for major commodity groups suggests that, if the aim is to reduce land use, the primary focus should be on meat and animal products, which account for 60 percent of the land footprint associated with food loss and waste. If the aim is to target water scarcity, cereals and pulses make the largest contribution (more than 70 percent), followed by fruits and vegetables. In terms of greenhouse gas (GHG) emissions associated with food loss and waste, the biggest contribution is again from cereals and pulses (more than 60 percent), followed by roots, tubers and oil-bearing crops. However, the environmental footprint for different commodities also varies across regions and countries, due, among other things, to differences in crop yields and production techniques. According to the IPCC 6th Assessment Report, the reduction of food waste would be beneficial for improving availability of resources such as "water, land-use, energy consumption" and for the overall reduction of greenhouse gas emissions into the atmosphere.
Prevention and valorisation
At the 2022 United Nations Biodiversity Conference, nations adopted an agreement for preserving biodiversity, including a commitment to reduce food waste by 50% by the year 2030.
According to FAO's The State of Food and Agriculture 2019, the case for reducing food loss and waste includes gains that society can reap but which individual actors may not take into account, namely: (i) increased productivity and economic growth; (ii) improved food security and nutrition; and (iii) mitigation of environmental impacts of losing and wasting food, in particular in terms of reducing greenhouse gas (GHG) emissions as well as lowering pressure on land and water resources. The last two societal gains, in particular, are typically seen as externalities of reducing food loss and waste.
Response to the problem of food waste at all social levels has varied hugely, including campaigns from advisory and environmental groups, and concentrated media attention on the subject.
As suggested by the food waste hierarchy, prevention and reuse pathways for human consumption have the highest priority levels for food waste treatment. The general approach to food waste reduction comprises two main pathways: prevention and valorisation. Prevention of food waste covers all actions that reduce food production and ultimately prevent food from being produced in vain, such as food donations or re-processing into new food products. Valorisation, on the other hand, comprises actions that recover the materials, nutrients or energy in food waste, for instance by producing animal feed, fuel or energy from the "wastes", treating them as a potential resource.
Multiple studies have examined the environmental benefits of food waste prevention measures, including food donations, recovery of unharvested vegetables for re-use in food production, re-processing of surplus bread for beer production, and producing chutney or juice from leftovers. Food waste can also be used to produce multiple high-value products, such as a fish oil substitute for food or feed use via marine microalgae, without compromising the ability to produce energy via biogas. The current general consensus suggests that reducing food waste by either prevention or valorisation for human consumption yields higher environmental benefits than the lower priority levels, such as energy production or disposal.
Multiple private enterprises have developed hardware and software solutions dealing mainly with the prevention of food waste within foodservice production facilities (contract catering, hotels & resorts, cruise ships, casinos etc.), by gathering quantitative and qualitative data about the specific food waste, helping chefs and managers reduce food waste by up to 70% by improving and optimizing their workflows and menus.
Food rescue
There are multiple initiatives that rescue food that would otherwise not be consumed by humans anymore. The food can come from supermarkets, restaurants or private households for example. Such initiatives are:
food banks,
online platforms like Too Good To Go and Olio,
public foodsharing shelves like those from foodsharing.de and
dumpster diving.
Consumer marketing
One way of dealing with food waste is to reduce its creation. Consumers can reduce spoilage by planning their food shopping, avoiding potentially wasteful spontaneous purchases, and storing foods properly (and also avoiding too large a buildup of perishable stock). Widespread educational campaigns have been shown to be an effective way to reduce food waste.
A British campaign called "Love Food, Hate Waste" has raised awareness about preventative measures to address food waste for consumers. Through advertisements, information on food storage and preparation and in-store education, the UK observed a 21% decrease in avoidable household food waste over the course of 5 years.
Another potential solution is for "smart packaging" which would indicate when food is spoiled more precisely than expiration dates currently do, for example with temperature-sensitive ink, plastic that changes color when exposed to oxygen, or gels that change color with time.
An initiative in Curitiba, Brazil, called Cambio Verde allows farmers to provide surplus produce (produce they would otherwise discard due to too low prices) to people that bring glass and metal to recycling facilities (to encourage further waste reduction). In Europe, the Food Surplus Entrepreneurs Network (FSE Network), coordinates a network of social businesses and nonprofit initiatives with the goal to spread best practices to increase the use of surplus food and reduction of food waste.
An overarching consensus exists on the substantial environmental benefits of food waste reduction. However, rebound effects may cause substitutive consumption as a result of economic savings made from food waste prevention, potentially offsetting more than half of the avoided emissions (depending on the type of food and price elasticities involved).
Collection
In areas where waste collection is a public function, food waste is usually managed by the same governmental organization as other waste collection. Most food waste is combined with general waste at the source. Separate collections, also known as source-separated organics, have the advantage that food waste can be disposed of in ways not applicable to other wastes. In the United States, companies find higher and better uses for large commercial generators of food and beverage waste.
From the end of the 19th century through the middle of the 20th century, many municipalities collected food waste (called "garbage" as opposed to "trash") separately. This was typically disinfected by steaming and fed to pigs, either on private farms or in municipal piggeries.
Separate curbside collection of food wastes is now being revived in some areas. To keep collection costs down and raise the rate of food waste segregation, some local authorities, especially in Europe, have introduced "alternate weekly collections" of biodegradable waste (including, e.g., garden waste), which enable a wider range of recyclable materials to be collected at reasonable cost, and improve their collection rates. However, they result in a two-week wait before the waste is collected. The criticism is that, particularly during hot weather, food waste rots and stinks, and attracts vermin. Waste container design is therefore essential to making such operations feasible. Curbside collection of food waste is also done in the U.S., in some cases by combining food scraps and yard waste. Several states in the U.S. have introduced a yard waste ban, not accepting leaves, brush, trimmings, etc. in landfills. Collection of food scraps and yard waste combined is then recycled and composted for reuse.
Disposal
As alternatives to landfill, food waste can be composted to produce soil and fertilizer, fed to animals or insects, or used to produce energy or fuel. Some wasted fruit parts can also be biorefined to extract useful substances for industry (e.g. succinic acid from orange peels, lycopene from tomato peels).
Landfills and greenhouse gases
Dumping food waste in a landfill causes odour as it decomposes, attracts flies and vermin, and has the potential to add biological oxygen demand (BOD) to the leachate. The European Union Landfill Directive and Waste Regulations, like regulations in other countries, enjoin diverting organic wastes away from landfill disposal for these reasons. Starting in 2015, organic waste from New York City restaurants was banned from landfills.
In countries such as the United States and the United Kingdom, food scraps constitute around 19% of the waste buried in landfills, where it biodegrades very easily and produces methane, a powerful greenhouse gas.
Methane (CH4) is the second most prevalent greenhouse gas released into the air, and is also produced by landfills in the U.S. Although methane spends less time in the atmosphere (12 years) than CO2, it is more efficient at trapping radiation: its impact on climate change is 25 times greater than that of CO2 over a 100-year period. Human activities account for over 60% of methane emissions globally.
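A small sketch of the CO2-equivalent arithmetic implied by that factor (illustrative only; the 25× value follows the text, and newer IPCC assessments use somewhat different figures):

GWP_100_CH4 = 25  # 100-year global warming potential of methane, per the figure above

def methane_to_co2e(tonnes_ch4):
    # Express a mass of methane as tonnes of CO2-equivalent
    return tonnes_ch4 * GWP_100_CH4

print(methane_to_co2e(1000))  # 25000 tonnes CO2e from 1000 tonnes of methane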
Fodder and insect feed
Large quantities of fish, meat, dairy and grain are discarded at a global scale annually, when they can be used for things other than human consumption. The feeding of food scraps or slop to domesticated animals such as pigs or chickens is, historically, the most common way of dealing with household food waste. The animals turn roughly two thirds of their ingested food into gas or fecal waste, while the last third is digested and repurposed as meat or dairy products. There are also different ways of growing produce and feeding livestock that could ultimately reduce waste.
Bread and other cereal products discarded from the human food chain could be used to feed chickens. Chickens have traditionally been given mixtures of waste grains and milling by-products in a mixture called chicken scratch. As well, giving table scraps to backyard chickens is a large part of that movement's claim to sustainability, though not all backyard chicken growers recommend it. Ruminants and pigs have also been fed bakery waste for a long time.
Certain food waste (such as flesh) can also be used as feed in maggot farming. The maggots can then be fed to other animals. In China, some food waste is being processed by feeding it to cockroaches.
Composting
Food waste can be biodegraded by composting, and reused to fertilize soil. Composting is the aerobic process completed by microorganisms in which bacteria break down the food waste into simpler organic materials that can then be used in soil. By redistributing nutrients and high microbial populations, compost reduces water runoff and soil erosion by enhancing rainfall penetration, which has been shown to reduce losses of sediment, nutrients, and pesticides to streams by 75–95%.
Composting food waste leads to a decrease in the quantity of greenhouse gases released into the atmosphere. In landfills, organic food waste decomposes anaerobically, producing methane gas that is emitted into the atmosphere. When this biodegradable waste is composted, it decomposes aerobically and does not produce methane, but instead produces organic compost that can then be utilized in agriculture. Recently, the city of New York has begun to require that restaurants and food-producing companies compost their leftover food. Another instance of composting progress is a Wisconsin-based company called WasteCap, which is dedicated to helping local communities create composting plans.
Municipal Food Waste (MFW) can be composted to create this product of organic fertilizer, and many municipalities choose to do this, citing environmental protection and economic efficiency as reasoning. Transporting and dumping waste in landfills requires both money and room in landfills that have very limited available space. One municipality that chose to regulate MFW is San Francisco, which requires citizens to separate compost from trash on their own, instituting fines for non-compliance of $100 for individual homes and $500 for businesses. The city's economic reasoning for this controversial mandate is supported by its estimate that one business can save up to $30,000 annually on garbage disposal costs with the implementation of the required composting.
Home composting
Composting is an economical and environmentally conscious step many homeowners could take to reduce their impact on landfill waste. Instead of food scraps and spoiled food taking up space in trash cans or stinking up the kitchen before the bag is full, they could be put outside, broken down by worms, and added to garden beds.
There also exists an opportunity for increased home composting via social contagion, where people in a network can learn new behaviors such as home composting, and the new behavior can spread spontaneously through the group. If enough people are influenced, the community can reach a tipping point, in which a majority of people transition to a new habit; a 2018 study published in Nature claims that a committed minority of only 25 per cent of a population was able to overturn the majority view.
Anaerobic digestion
Anaerobic digestion produces both useful gaseous products and a solid fibrous "compostable" material. Anaerobic digestion plants can provide energy from waste by burning the methane created from food and other organic wastes to generate electricity, defraying the plants' costs and reducing greenhouse gas emissions. The United States Environmental Protection Agency states that the use of anaerobic digestion allows large amounts of food waste to avoid landfills. Instead of greenhouse gases being released into the atmosphere from landfills, the gases can be captured in these facilities for reuse.
Because this process produces high volumes of biogas, there are potential safety issues such as explosion and poisoning, which require proper maintenance and the use of personal protective equipment. Certain U.S. states, such as Oregon, have implemented permit requirements for such facilities, based on the potential danger to the population and surrounding environment.
Food waste coming through the sanitary sewers from garbage disposal units is treated along with other sewage and contributes to sludge.
Commercial liquid food waste
Commercially, food waste in the form of wastewater coming from commercial kitchens' sinks, dishwashers and floor drains is collected in holding tanks called grease interceptors to minimize flow to the sewer system. This often foul-smelling waste contains both organic and inorganic waste (chemical cleaners, etc.) and may also contain hazardous hydrogen sulfide gases. It is referred to as fats, oils, and grease (FOG) waste or more commonly "brown grease" (versus "yellow grease", which is fryer oil that is easily collected and processed into biodiesel) and is an overwhelming problem, especially in the US, for the aging sewer systems. Per the US EPA, sanitary sewer overflows also occur due to the improper discharge of FOGs to the collection system. Overflows discharge untreated wastewater into local waterways each year, and up to 5,500 illnesses annually are attributed to exposure to contamination from sanitary sewer overflows into recreational waters.
See also
Anaerobic digestion
Gleaning
List of waste types
Post-harvest losses (grains)
Post-harvest losses (vegetables)
Source Separated Organics
Waste & Resources Action Programme
Waste management
Sources
References
Further reading
External links
NRDC page on food waste (advocacy site with suggestions)
Reduced Food Waste - Solution Summary Project Drawdown, 2020.
Food Waste and Rescue Report in Israel
Wasting Nothing, Project Regeneration, 2021.
Waste
Waste | Food loss and waste | Physics | 10,927 |
60,415,297 | https://en.wikipedia.org/wiki/Radial%20basis%20function%20interpolation | Radial basis function (RBF) interpolation is an advanced method in approximation theory for constructing high-order accurate interpolants of unstructured data, possibly in high-dimensional spaces. The interpolant takes the form of a weighted sum of radial basis functions. RBF interpolation is a mesh-free method, meaning the nodes (points in the domain) need not lie on a structured grid, and does not require the formation of a mesh. It is often spectrally accurate and stable for large numbers of nodes even in high dimensions.
Many interpolation methods can be used as the theoretical foundation of algorithms for approximating linear operators, and RBF interpolation is no exception. RBF interpolation has been used to approximate differential operators, integral operators, and surface differential operators.
Examples
Let $f(x) = e^{x \cos(3\pi x)}$ and let $x_k$, $k = 1, \dots, 15$, be 15 equally spaced points on the interval $[0, 1]$. We will form $s(x) = \sum_{k=1}^{15} w_k \varphi(|x - x_k|)$, where $\varphi$ is a radial basis function, and choose the weights $w_k$ such that $s(x_k) = f(x_k)$ ($s$ interpolates $f$ at the chosen points). In matrix notation this can be written as
$$\begin{bmatrix} \varphi(|x_1 - x_1|) & \cdots & \varphi(|x_1 - x_{15}|) \\ \vdots & \ddots & \vdots \\ \varphi(|x_{15} - x_1|) & \cdots & \varphi(|x_{15} - x_{15}|) \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_{15} \end{bmatrix} = \begin{bmatrix} f(x_1) \\ \vdots \\ f(x_{15}) \end{bmatrix}.$$
Choosing $\varphi(r) = e^{-(\varepsilon r)^2}$, the Gaussian, with a shape parameter of $\varepsilon = 3$, we can then solve the matrix equation for the weights and plot the interpolant. Plotting the interpolating function, we see that it is visually the same as $f$ everywhere except near the left boundary (an example of Runge's phenomenon), where it is still a very close approximation; the maximum error is small and occurs near the left endpoint.
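The example can be reproduced in a few lines of Python. This is a minimal sketch, assuming the test function, nodes, and shape parameter given above, with the interpolation system solved by a dense linear solver:

```python
import numpy as np

# Test function, nodes, and shape parameter from the example above
f = lambda x: np.exp(x * np.cos(3 * np.pi * x))
xk = np.linspace(0, 1, 15)               # 15 equally spaced nodes on [0, 1]
eps = 3.0                                # shape parameter
phi = lambda r: np.exp(-(eps * r) ** 2)  # Gaussian radial basis function

# Interpolation matrix A[i, j] = phi(|x_i - x_j|); solve A w = f(x_k) for the weights
A = phi(np.abs(xk[:, None] - xk[None, :]))
w = np.linalg.solve(A, f(xk))

# Evaluate the interpolant s(x) = sum_k w_k phi(|x - x_k|) on a fine grid
x = np.linspace(0, 1, 1001)
s = phi(np.abs(x[:, None] - xk[None, :])) @ w
print("maximum error:", np.max(np.abs(s - f(x))))
```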
Motivation
The Mairhuber–Curtis theorem says that for any open set $V$ in $\mathbb{R}^n$ with $n \geq 2$, and any linearly independent functions $b_1, \dots, b_n$ on $V$, there exists a set of $n$ points $x_1, \dots, x_n$ in the domain such that the interpolation matrix
$$\begin{bmatrix} b_1(x_1) & \cdots & b_n(x_1) \\ \vdots & \ddots & \vdots \\ b_1(x_n) & \cdots & b_n(x_n) \end{bmatrix}$$
is singular.
This means that if one wishes to have a general interpolation algorithm, one must choose the basis functions to depend on the interpolation points. In 1971, Rolland Hardy developed a method of interpolating scattered data using interpolants of the form $s(\mathbf{x}) = \sum_{k=1}^{N} w_k \sqrt{\|\mathbf{x} - \mathbf{x}_k\|^2 + c^2}$. This is interpolation using a basis of shifted multiquadric functions, now more commonly written as $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$, and is the first instance of radial basis function interpolation. It has been shown that the resulting interpolation matrix will always be non-singular. This does not violate the Mairhuber–Curtis theorem since the basis functions depend on the points of interpolation. Choosing a radial kernel such that the interpolation matrix is non-singular is exactly the definition of a strictly positive definite function. Such functions, including the Gaussian, inverse quadratic, and inverse multiquadric are often used as radial basis functions for this reason.
Shape-parameter tuning
Many radial basis functions have a parameter that controls their relative flatness or peakedness. This parameter is usually represented by the symbol $\varepsilon$, with the function becoming increasingly flat as $\varepsilon \to 0$. For example, Rolland Hardy used the formula $\sqrt{r^2 + c^2}$ for the multiquadric, however nowadays the formula $\sqrt{1 + (\varepsilon r)^2}$ is used instead. These formulas are equivalent up to a scale factor. This factor is inconsequential since the basis vectors have the same span and the interpolation weights will compensate. By convention, the basis function is scaled such that $\varphi(0) = 1$, as seen in the plots of the Gaussian functions and the bump functions.
A consequence of this choice is that the interpolation matrix approaches the identity matrix as $\varepsilon \to \infty$, leading to stability when solving the matrix system. The resulting interpolant will in general be a poor approximation to the function, since it will be near zero everywhere except near the interpolation points, where it sharply peaks: the so-called "bed-of-nails interpolant".
On the opposite side of the spectrum, the condition number of the interpolation matrix diverges to infinity as $\varepsilon \to 0$, leading to ill-conditioning of the system. In practice, one chooses a shape parameter so that the interpolation matrix is "on the edge of ill-conditioning" (e.g. with a condition number of roughly $10^{12}$ for double-precision floating point).
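The trade-off between the flat and peaked regimes can be seen by computing the condition number of the Gaussian interpolation matrix over a range of shape parameters. A short illustrative sketch (the node set and the list of $\varepsilon$ values are arbitrary choices, not from the source):

```python
import numpy as np

xk = np.linspace(0, 1, 15)
r = np.abs(xk[:, None] - xk[None, :])
for eps in [0.5, 1.0, 3.0, 10.0, 50.0]:
    A = np.exp(-(eps * r) ** 2)  # Gaussian interpolation matrix
    # small eps: nearly flat basis, huge condition number;
    # large eps: nearly the identity matrix, condition number near 1
    print(f"eps = {eps:5.1f}  cond(A) = {np.linalg.cond(A):.3e}")
```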
There are sometimes other factors to consider when choosing a shape parameter. For example, the bump function
$$\varphi(r) = \begin{cases} \exp\left(-\dfrac{1}{1 - (\varepsilon r)^2}\right) & \text{for } r < \dfrac{1}{\varepsilon} \\ 0 & \text{otherwise} \end{cases}$$
has compact support (it is zero everywhere except when $r < \tfrac{1}{\varepsilon}$), leading to a sparse interpolation matrix.
Some radial basis functions such as the polyharmonic splines have no shape-parameter.
See also
Kriging
References
Numerical analysis
Approximation theory
Interpolation | Radial basis function interpolation | Mathematics | 875 |
1,107,596 | https://en.wikipedia.org/wiki/Linear%20approximation | In mathematics, a linear approximation is an approximation of a general function using a linear function (more precisely, an affine function). They are widely used in the method of finite differences to produce first order methods for solving or approximating solutions to equations.
Definition
Given a twice continuously differentiable function $f$ of one real variable, Taylor's theorem for the case $n = 1$ states that
$$f(x) = f(a) + f'(a)(x - a) + R_2,$$
where $R_2$ is the remainder term. The linear approximation is obtained by dropping the remainder:
$$f(x) \approx f(a) + f'(a)(x - a).$$
This is a good approximation when $x$ is close enough to $a$, since a curve, when closely observed, will begin to resemble a straight line. Therefore, the expression on the right-hand side is just the equation for the tangent line to the graph of $f$ at $x = a$. For this reason, this process is also called the tangent line approximation. Linear approximations in this case are further improved when the second derivative at $a$, $f''(a)$, is sufficiently small (close to zero), i.e., at or near an inflection point.
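A minimal sketch of the tangent-line approximation in Python; the choice of the square root function and the base point $a = 4$ is illustrative, not from the source:

```python
import math

def linear_approx(f, fprime, a):
    """Return the tangent-line approximation L(x) = f(a) + f'(a)(x - a)."""
    fa, dfa = f(a), fprime(a)
    return lambda x: fa + dfa * (x - a)

# Approximate the square root near a = 4, where f'(x) = 1 / (2 sqrt(x))
L = linear_approx(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0)
print(L(4.1))          # 2.025
print(math.sqrt(4.1))  # 2.0248..., close because 4.1 is near 4
```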
If $f$ is concave down in the interval between $x$ and $a$, the approximation will be an overestimate (since the derivative is decreasing in that interval). If $f$ is concave up, the approximation will be an underestimate.
Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix. For example, given a differentiable function $f(x, y)$ with real values, one can approximate $f(x, y)$ for $(x, y)$ close to $(a, b)$ by the formula
$$f(x, y) \approx f(a, b) + \frac{\partial f}{\partial x}(a, b)\,(x - a) + \frac{\partial f}{\partial y}(a, b)\,(y - b).$$
The right-hand side is the equation of the plane tangent to the graph of $z = f(x, y)$ at $(a, b)$.
In the more general case of Banach spaces, one has
$$f(x) \approx f(a) + Df(a)(x - a),$$
where $Df(a)$ is the Fréchet derivative of $f$ at $a$.
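The vector case can be sketched numerically with the Jacobian standing in for the derivative. The function $f(x, y) = (xy,\, x + y^2)$ and its hand-computed Jacobian below are illustrative assumptions, not from the source:

```python
import numpy as np

def linear_approx_vec(f, jacobian, a):
    """f(x) ~ f(a) + J(a) (x - a), the vector form of the tangent approximation."""
    fa, Ja = f(a), jacobian(a)
    return lambda x: fa + Ja @ (np.asarray(x, dtype=float) - a)

# Illustrative function f(x, y) = (x*y, x + y^2) with its hand-computed Jacobian
f = lambda p: np.array([p[0] * p[1], p[0] + p[1] ** 2])
J = lambda p: np.array([[p[1], p[0]], [1.0, 2 * p[1]]])

a = np.array([1.0, 2.0])
L = linear_approx_vec(f, J, a)
print(L([1.1, 2.05]))            # [2.25, 5.3]
print(f(np.array([1.1, 2.05])))  # [2.255, 5.3025] -- close near a
```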
Applications
Optics
Gaussian optics is a technique in geometrical optics that describes the behaviour of light rays in optical systems by using the paraxial approximation, in which only rays which make small angles with the optical axis of the system are considered. In this approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of a sphere. In this case, simple explicit formulae can be given for parameters of an imaging system such as focal distance, magnification and brightness, in terms of the geometrical shapes and material properties of the constituent elements.
Period of oscillation
The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, $\theta_0$, called the amplitude. It is independent of the mass of the bob. The true period $T$ of a simple pendulum, the time taken for a complete cycle of an ideal simple gravity pendulum, can be written in several different forms (see pendulum), one example being the infinite series:
$$T = 2\pi\sqrt{\frac{L}{g}} \left(1 + \frac{1}{16}\theta_0^2 + \frac{11}{3072}\theta_0^4 + \cdots\right),$$
where L is the length of the pendulum and g is the local acceleration of gravity.
However, if one takes the linear approximation (i.e. if the amplitude is limited to small swings, $\theta_0 \ll 1$), the period is:
$$T \approx 2\pi\sqrt{\frac{L}{g}}.$$
In the linear approximation, the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time.
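A short sketch comparing the linear approximation with the truncated series above; the pendulum length and the amplitudes are illustrative choices:

```python
import math

def period_linear(length, g=9.81):
    """Small-angle (linear) approximation: independent of the amplitude."""
    return 2 * math.pi * math.sqrt(length / g)

def period_series(length, theta0, g=9.81):
    """First three terms of the amplitude-dependent series above."""
    T0 = 2 * math.pi * math.sqrt(length / g)
    return T0 * (1 + theta0 ** 2 / 16 + 11 * theta0 ** 4 / 3072)

# For small amplitudes the two agree closely; the gap grows with amplitude
for degrees in (5, 20, 45):
    theta0 = math.radians(degrees)
    print(degrees, period_linear(1.0), period_series(1.0, theta0))
```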
Electrical resistivity
The electrical resistivity of most materials changes with temperature. If the temperature $T$ does not vary too much, a linear approximation is typically used:
$$\rho(T) = \rho_0\left[1 + \alpha(T - T_0)\right],$$
where $\alpha$ is called the temperature coefficient of resistivity, $T_0$ is a fixed reference temperature (usually room temperature), and $\rho_0$ is the resistivity at temperature $T_0$. The parameter $\alpha$ is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, $\alpha$ is different for different reference temperatures. For this reason it is usual to specify the temperature that $\alpha$ was measured at with a suffix, such as $\alpha_{15}$, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
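A minimal numerical sketch of the linear model; the copper values used below are approximate handbook figures, included only as an illustration:

```python
def resistivity(T, rho0, alpha, T0=20.0):
    """Linear model rho(T) = rho0 * (1 + alpha * (T - T0))."""
    return rho0 * (1 + alpha * (T - T0))

# Approximate handbook values for copper, referenced to 20 C (illustrative only)
rho20 = 1.68e-8   # ohm metre
alpha20 = 0.0039  # per degree C
print(resistivity(100.0, rho20, alpha20))  # roughly 2.2e-8 ohm metre at 100 C
```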
See also
Binomial approximation
Euler's method
Finite differences
Finite difference methods
Newton's method
Power series
Taylor series
Notes
References
Further reading
Differential calculus
Numerical analysis
First order methods | Linear approximation | Mathematics | 858 |
30,881,248 | https://en.wikipedia.org/wiki/Curvilinear%20motion | The motion of an object moving in a curved path is called curvilinear motion.
Example: A stone thrown into the air at an angle.
Curvilinear motion describes the motion of a moving particle that conforms to a known or fixed curve. The study of such motion involves the use of two coordinate systems, the first being planar motion and the second being cylindrical motion.
Planar motion
In planar motion, the velocity and acceleration of the particle are described by components tangential and normal to the fixed curve. The velocity is always tangential to the curve, and the acceleration can be broken up into both a tangential and a normal component.
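This decomposition can be computed numerically for any smooth planar path. A minimal sketch using central finite differences, taking the stone-thrown-at-an-angle example as the illustrative curve (all numeric values are arbitrary):

```python
import numpy as np

def decompose_acceleration(r, t, h=1e-5):
    """Split the acceleration of a planar path r(t) into tangential and normal parts."""
    v = (r(t + h) - r(t - h)) / (2 * h)            # central-difference velocity
    a = (r(t + h) - 2 * r(t) + r(t - h)) / h ** 2  # central-difference acceleration
    t_hat = v / np.linalg.norm(v)                  # unit tangent along the curve
    a_t = np.dot(a, t_hat)                         # tangential part (changes speed)
    a_n = np.linalg.norm(a - a_t * t_hat)          # normal part (changes direction)
    return a_t, a_n

# Illustrative path: a stone thrown at an angle (projectile motion, SI units)
g, v0x, v0y = 9.81, 10.0, 10.0
r = lambda t: np.array([v0x * t, v0y * t - 0.5 * g * t ** 2])
print(decompose_acceleration(r, 0.5))  # the two parts recombine to magnitude g
```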
Cylindrical components
With cylindrical co-ordinates, the motion is best described in polar form, with components that resemble polar vectors. As with planar motion, the velocity is always tangential to the curve, but in this form the acceleration consists of different intermediate components that can run along the radius and its normal vector. This type of co-ordinate system is best used when the motion is restricted to the plane upon which it travels.
See also
Rectilinear motion
References
Motion (physics) | Curvilinear motion | Physics | 245 |
11,797,554 | https://en.wikipedia.org/wiki/Sclerotinia%20spermophila | Sclerotinia spermophila is a plant pathogen, infecting red clover, but can also be considered an animal pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Sclerotiniaceae
Fungi described in 1948
Fungus species | Sclerotinia spermophila | Biology | 54 |
155,430 | https://en.wikipedia.org/wiki/Kleene%20algebra | In mathematics, a Kleene algebra ( ; named after Stephen Cole Kleene) is an idempotent (and thus partially ordered) semiring endowed with a closure operator. It generalizes the operations known from regular expressions.
Definition
Various inequivalent definitions of Kleene algebras and related structures have been given in the literature. Here we will give the definition that seems to be the most common nowadays.
A Kleene algebra is a set A together with two binary operations + : A × A → A and · : A × A → A and one function * : A → A, written as a + b, ab and a* respectively, so that the following axioms are satisfied.
Associativity of + and ·: a + (b + c) = (a + b) + c and a(bc) = (ab)c for all a, b, c in A.
Commutativity of +: a + b = b + a for all a, b in A
Distributivity: a(b + c) = (ab) + (ac) and (b + c)a = (ba) + (ca) for all a, b, c in A
Identity elements for + and ·: There exists an element 0 in A such that for all a in A: a + 0 = 0 + a = a. There exists an element 1 in A such that for all a in A: a1 = 1a = a.
Annihilation by 0: a0 = 0a = 0 for all a in A.
The above axioms define a semiring. We further require:
+ is idempotent: a + a = a for all a in A.
It is now possible to define a partial order ≤ on A by setting a ≤ b if and only if a + b = b (or equivalently: a ≤ b if and only if there exists an x in A such that a + x = b; with any definition, a ≤ b ≤ a implies a = b). With this order we can formulate the last four axioms about the operation *:
1 + a(a*) ≤ a* for all a in A.
1 + (a*)a ≤ a* for all a in A.
if a and x are in A such that ax ≤ x, then a*x ≤ x
if a and x are in A such that xa ≤ x, then x(a*) ≤ x
Intuitively, one should think of a + b as the "union" or the "least upper bound" of a and b and of ab as some multiplication which is monotonic, in the sense that a ≤ b implies ax ≤ bx. The idea behind the star operator is a* = 1 + a + aa + aaa + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration".
Examples
Let Σ be a finite set (an "alphabet") and let A be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then A forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra.
Again let Σ be an alphabet. Let A be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of all languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of A again belong to A, and so does the Kleene star operation applied to any element of A. We obtain a Kleene algebra A with 0 being the empty set and 1 being the set that only contains the empty string.
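Because the Kleene star of a language is generally infinite, a direct computation has to truncate the universe to strings of bounded length. The following illustrative Python sketch builds such a truncated model over a two-letter alphabet and checks two of the laws within it; the truncation length is an arbitrary choice:

```python
MAX_LEN = 4  # truncate to strings of length <= MAX_LEN so that star terminates

def plus(S, T):  # + is set union
    return S | T

def dot(S, T):   # concatenation of languages, truncated to the bounded universe
    return {s + t for s in S for t in T if len(s + t) <= MAX_LEN}

def star(S):     # {""} + S + SS + SSS + ... within the truncated universe
    result, frontier = {""}, {""}
    while frontier:
        frontier = dot(frontier, S) - result
        result |= frontier
    return result

a, b = {"a"}, {"b"}
assert plus(a, a) == a                         # idempotence of +
assert plus({""}, dot(a, star(a))) == star(a)  # 1 + a(a*) = a*
print(sorted(star(plus(a, b)), key=len))       # all strings over {a, b} up to length 4
```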
Let M be a monoid with identity element e and let A be the set of all subsets of M. For two such subsets S and T, let S + T be the union of S and T and set ST = {st : s in S and t in T}. S* is defined as the submonoid of M generated by S, which can be described as {e} ∪ S ∪ SS ∪ SSS ∪ ... Then A forms a Kleene algebra with 0 being the empty set and 1 being {e}. An analogous construction can be performed for any small category.
The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces V and W, define V + W to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. Define V · W = span{vw : v in V and w in W}, the linear span of the products of vectors from V and W respectively. Define 1 = span{1}, the span of the unit of the algebra. The closure of V is the direct sum of all powers of V.
Suppose M is a set and A is the set of all binary relations on M. Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra.
Every Boolean algebra with operations ∨ and ∧ turns into a Kleene algebra if we use ∨ for +, ∧ for · and set a* = 1 for all a.
A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm, computing the shortest path's length for every two vertices of a weighted directed graph, by Kleene's algorithm, computing a regular expression for every two states of a deterministic finite automaton.
Using the extended real number line, take a + b to be the minimum of a and b and ab to be the ordinary sum of a and b (with the sum of +∞ and −∞ being defined as +∞). a* is defined to be the real number zero for nonnegative a and −∞ for negative a. This is a Kleene algebra with zero element +∞ and one element the real number zero.
A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight.
For any two graph nodes (automaton states), the regular expression computed by Kleene's algorithm evaluates, in this particular Kleene algebra, to the shortest path length between the nodes.
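This connection can be made concrete: in the (min, +) algebra, the Kleene star of the matrix of edge weights is exactly the all-pairs shortest-path matrix, and the triple loop that computes it is the Floyd–Warshall recurrence. A minimal sketch, assuming nonnegative edge weights and an illustrative three-node graph:

```python
import math

INF = math.inf  # the 0 element of the min-plus algebra (no path)

def matrix_star(W):
    """Kleene star of a weight matrix in the (min, +) algebra.

    Entry (i, j) of W* is the shortest-path length from i to j; the triple
    loop is the Floyd-Warshall recurrence, valid when there are no
    negative cycles.
    """
    n = len(W)
    # start from 1 + W: zero-length paths on the diagonal
    D = [[min(W[i][j], 0.0 if i == j else INF) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])  # "+" is min, "." is +
    return D

# Illustrative three-node directed graph (INF means no edge)
W = [[INF, 3.0, 8.0],
     [INF, INF, 2.0],
     [4.0, INF, INF]]
print(matrix_star(W))  # entry [0][2] is min(8, 3 + 2) = 5
```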
Properties
Zero is the smallest element: 0 ≤ a for all a in A.
The sum a + b is the least upper bound of a and b: we have a ≤ a + b and b ≤ a + b and if x is an element of A with a ≤ x and b ≤ x, then a + b ≤ x. Similarly, a_1 + ... + a_n is the least upper bound of the elements a_1, ..., a_n.
Multiplication and addition are monotonic: if a ≤ b, then
a + x ≤ b + x,
ax ≤ bx, and
xa ≤ xb
for all x in A.
Regarding the star operation, we have
0* = 1 and 1* = 1,
a ≤ b implies a* ≤ b* (monotonicity),
a^n ≤ a* for every natural number n, where a^n is defined as n-fold multiplication of a,
(a*)(a*) = a*,
(a*)* = a*,
1 + a(a*) = a* = 1 + (a*)a,
ax = xb implies (a*)x = x(b*),
((ab)*)a = a((ba)*),
(a+b)* = a*(b(a*))*, and
pq = 1 = qp implies q(a*)p = (qap)*.
If A is a Kleene algebra and n is a natural number, then one can consider the set M_n(A) consisting of all n-by-n matrices with entries in A.
Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that M_n(A) becomes a Kleene algebra.
History
Kleene introduced regular expressions and gave some of their algebraic laws.
Although he didn't define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions.
Redko proved that no finite set of equational axioms can characterize the algebra of regular languages.
Salomaa gave complete axiomatizations of this algebra, however depending on problematic inference rules.
The problem of providing a complete set of axioms, which would allow derivation of all equations among regular expressions, was intensively studied by John Horton Conway under the name of regular algebras, however, the bulk of his treatment was infinitary.
In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages.
In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering a ≤ b as an abbreviation for a + b = b), and is equationally complete for the algebra of regular languages, that is, two regular expressions a and b denote the same language only if a = b follows from the above axioms.
Generalization (or relation to other structures)
Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation: a* = aa* + 1 = a*a + 1. This quasi-inverse is not necessarily unique. In a Kleene algebra, a* is the least solution to the fixpoint equations: X = aX + 1 and X = Xa + 1.
Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem.
See also
Action algebra
Algebraic structure
Kleene star
Regular expression
Star semiring
Valuation algebra
References
Further reading
The introduction of this book reviews advances in the field of Kleene algebra made in the last 20 years, which are not discussed in the article above.
Algebraic structures
Algebraic logic
Formal languages
Many-valued logic | Kleene algebra | Mathematics | 2,152 |
44,518,760 | https://en.wikipedia.org/wiki/Depth%20%28video%20game%29 | Depth is a video game developed by Digital Confectioners and released for Microsoft Windows in 2014. It is an asymmetrical multiplayer first-person shooter that pits treasure hunting divers against sharks.
Gameplay
The game is a first-person shooter taking place in underwater environments. Players can either be divers or sharks. Divers escort and defend an automated submersible to collect sunken treasure, utilizing firearms, harpoons, explosives, and other equipment bought with collected treasure, while sharks, with different species having different abilities, "evolve" new abilities by killing and eating divers. Games are won by either side running out of respawns, by divers successfully escorting the submersible to an extraction point, or by sharks destroying the submersible.
Development
Depth began production in 2009, as a student project built as a mod for Unreal Tournament 3, by a small team led by Alex Quick of Killing Floor fame. Between 2010 and 2012, the game was ported to UDK and became a standalone game. However, development became stalled, due to concerns over gameplay. In 2013, Digital Confectioners partnered with the team to finish development. In 2016, Digital Confectioners bought out the project and have continued development on the title since.
Release
Depth was put on the Steam store as a pre-order on October 16, 2014, and released on November 3, 2014. "The Big Catch" update was released on December 16, 2014, adding 2 new shark classes, 1 new map and a new game type called "Megalodon Hunt."
Reception
Depth received moderately positive reviews from critics. IGN described Depth as having "an ocean of tense, unique gameplay moments," praising the game's level and sound design and describing playing as both a diver or a shark is a "fast, fun, and frantic experience." IGN criticized the game's lack of game modes and its "skimpy" customization options, saying that they "doubt the longevity of this otherwise ship-shape game." GameSpot is less favorable, noting low-quality textures, balancing issues and lack of memorable levels, describing the game as shallow. GameSpot did hope that future updates to clean up bugs and add new content could help the game reach its full potential.
Both Multiplayer and GameSpot stated that the game compares unfavorably with the gameplay established in the Left 4 Dead series.
Independent review site deviantrobot said that Depth is very similar to Evolve, but that "the game is unique enough that it stands on its own merits and it’s not just to fill the Evolve-shaped hole in your gaming library."
References
External links
Depth on Steam
2014 video games
Asymmetrical multiplayer video games
Indie games
First-person shooters
Windows games
Windows-only games
Fiction about shark attacks
Scuba diving video games
Video games about sharks
Video games developed in New Zealand | Depth (video game) | Physics | 577 |
21,698,103 | https://en.wikipedia.org/wiki/Crepant%20resolution | In algebraic geometry, a crepant resolution of a singularity is a resolution that does not affect the canonical class of the manifold. The term "crepant" was coined by by removing the prefix "dis" from the word "discrepant", to indicate that the resolutions have no discrepancy in the canonical class.
The crepant resolution conjecture of Ruan states that the orbifold cohomology of a Gorenstein orbifold is isomorphic to a semiclassical limit of the quantum cohomology of a crepant resolution.
In 2 dimensions, crepant resolutions of complex Gorenstein quotient singularities (du Val singularities) always exist and are unique, in 3 dimensions they exist but need not be unique as they can be related by flops, and in dimensions greater than 3 they need not exist.
A substitute for crepant resolutions which always exists is a terminal model. Namely, for every variety X over a field of characteristic zero such that X has canonical singularities (for example, rational Gorenstein singularities), there is a variety Y with Q-factorial terminal singularities and a birational projective morphism f: Y → X which is crepant in the sense that KY = f*KX.
Notes
References
Algebraic geometry
Singularity theory | Crepant resolution | Mathematics | 272 |
6,527,795 | https://en.wikipedia.org/wiki/Florida%20Space%20Research%20Institute | The Florida Space Research Institute (FSRI) was a statewide center for space research which was established by Florida's governor and the Florida legislature in 1999. The institute was created in an effort to increase collaboration between the academic, government, and private organizations with regard to aerospace. FSRI is closely involved with NASA's Centennial Challenges program, and signed a cooperative agreement with Kennedy Space Center in order to collaborate on the Advanced Learning Environment (ALE) initiative in 2001. FSRI also co-sponsored the NASA Spaceport Engineering Design Student Competition 2003 (NASA Spaceport 2003) along with the Florida Space Grant Consortium (FSGC).
Consolidation into Space Florida
With the Space Florida Act, enacted in May 2006, the Florida Legislature consolidated FSRI and two other organizations in order to create Space Florida.
References
External links
FSRI Home
NASA groups, organizations, and centers
Research institutes in Florida
1999 establishments in Florida
Space technology research institutes
Aerospace research institutes | Florida Space Research Institute | Astronomy | 193 |
22,580,058 | https://en.wikipedia.org/wiki/Chaetopsis%20grisea | Chaetopsis grisea is a species of fungus in the genus Chaetopsis.
References
Ascomycota
Fungus species | Chaetopsis grisea | Biology | 28 |
10,685,970 | https://en.wikipedia.org/wiki/Haplogroup%20pre-JT | Haplogroup pre-JT is a human mitochondrial DNA haplogroup (mtDNA). It is also called R2'JT.
Origin
Haplogroup pre-JT is a descendant of the haplogroup R. It is characterised by the mutation T4216C. The pre-JT clade has two direct descendant lineages, haplogroup JT and haplogroup R2.
Distribution
According to YFull MTree, haplogroup R2'JT has allegedly been sequenced in at least three individuals, among whom one came from ancient Egypt and one from modern Denmark. However, Ian Logan mutationally interpreted the Denmark sample as being a member of T1a.
One carrier of haplogroup R2'JT was found in an in-depth study of "108 Scandinavian Neolithic individuals".
Subclades
Its major subclade is Haplogroup JT, which further divides into Haplogroup J and Haplogroup T. Its other subclade is Haplogroup R2, which has such branches as R2a, R2b, and R2c.
Tree
R2'JT
R2
JT
J
T
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
References
External links
Ian Logan's Mitochondrial DNA Site
JT | Haplogroup pre-JT | Chemistry,Biology | 280 |
31,410,624 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%2C%20Inc. | Schrödinger, Inc. is an international scientific software and biotechnology company that specializes in developing computational tools and software for drug discovery and materials science.
Schrödinger's software is used by pharmaceutical companies, biotech firms, and academic researchers to simulate and model the behavior of molecules at the atomic level. This accelerates the design and development of new drugs and materials, reducing the time and cost of bringing them to market.
Schrödinger's software tools include molecular dynamics simulations, free energy calculations, quantum mechanics calculations, and virtual screening tools. The company also offers consulting services and collaborates with partners in the industry to advance the field of computational chemistry and drug discovery.
Products and services
Schrödinger's computational platforms evaluate compounds in silico, with experimental accuracy on properties such as binding affinity and solubility. The company's products include molecular modeling programs, and an Enterprise Informatics Platform named LiveDesign, which is intended to facilitate communication among interdisciplinary research teams.
In addition to computational platforms, Schrödinger develops custom software for enterprises, as well as training, computer-cluster design and implementation, and research-based drug discovery projects.
Schrödinger software licenses are available to academic institutions for education and not-for-profit research.
Partners
Schrödinger's partners include pharmaceutical companies such as Bayer and Takeda.
Nimbus Therapeutics, co-founded by Schrödinger, uses Schrödinger's drug screening and design platform for drug discovery. In 2016, Nimbus Therapeutics sold an Acetyl-CoA carboxylase (ACC) inhibitor designed by Schrödinger to Gilead Sciences in a deal worth up to $1.2 billion. As of spring 2019 the ACC inhibitor was moving through late-stage clinical trials in non-alcoholic steatohepatitis.
Recognition
In November 2013, Schrödinger, in collaboration with Cycle Computing and the University of Southern California, set a record for the world's largest and fastest cloud computing run by using 156,000 cores on Amazon Web Services to screen over 205,000 molecules for materials science research. That work was a follow-up to a 2012 collaboration which saw Cycle Computing creating a 50,000 core virtual supercomputer using Amazon and Schrödinger's infrastructure; at that time, it was used to analyze 2.1 million compounds in 3 hours.
References
External links
American companies established in 1990
Software companies based in New York City
Molecular modelling software
Publicly traded companies based in New York City
Research support companies
Life sciences industry
Companies listed on the Nasdaq
2020 initial public offerings
Software companies of the United States
1990 establishments in the United States
1990 establishments in New York City
Software companies established in 1990 | Schrödinger, Inc. | Chemistry,Biology | 563 |
30,949,789 | https://en.wikipedia.org/wiki/Atlantic%20Mill | The Atlantic Mill was located on the east side of Redridge, Michigan near the Redridge Steel Dam. It was constructed in 1892 and closed in 1912. It was connected to the Atlantic mine via a 9 mile long Atlantic and Lake Superior Railroard. The previous path of the railroad is now a scenic tree-covered road. It is thought that currents have moved the stamp sand produced by this mill to the current site of the North Canal Township Park.
Old Atlantic mill
Prior to operating a mill in Redridge, the Atlantic Mining Company also operated a mill near Cole's Creek on Portage Lake. The mill moved because the federal government told the company its tailings must not fill the shipping channel. The old mill was located at approximately
See also
Copper mining in Michigan
Michigan Smelter
List of Copper Country mills
Notes
Metallurgical facilities in Michigan
Buildings and structures in Houghton County, Michigan | Atlantic Mill | Chemistry | 180 |
11,467,635 | https://en.wikipedia.org/wiki/Omphalia%20tralucida | Omphalia tralucida is a species of fungus in the family Tricholomataceae. First described scientifically by Donald Everett Bliss in 1938, it causes decline disease in the date palm (Phoenix dactylifera).
References
Fungi described in 1938
Fungi of North America
Fungal plant pathogens and diseases
Palm diseases
Food plant pathogens and diseases
Tricholomataceae
Fungus species | Omphalia tralucida | Biology | 81 |
58,977,586 | https://en.wikipedia.org/wiki/Diffuse%20design | Diffuse design refers to the designing capability of individuals who are not formally trained as designers. Drawing on the natural human ability to adopt a design approach, nonexpert designers bring diffuse design into the world via a combination of critical sense, creativity, and practical sense.
Diffuse design was coined by Italian design scholar Ezio Manzini and was a central theme of his 2015 book Design, When Everybody Designs. Manzini asserts that everybody is endowed with the ability to design, though not everyone is a competent designer and fewer still become professional designers. He also suggests it is the role of expert designers in social innovation contexts to improve the conditions by which different social actors can take part in co-design processes in a more expert fashion.
References
Design | Diffuse design | Engineering | 149 |
28,496,873 | https://en.wikipedia.org/wiki/LISE%2B%2B | The program LISE++ is designed to predict the intensity and purity of radioactive ion beams (RIB) produced by In-flight separators. LISE++ also facilitates the tuning of experiments where its results can be quickly compared to on-line data. The program is constantly expanding and evolving from the feedback of its users around the world.
Description
The aim of LISE++ is to simulate the production of RIBs via some type of nuclear reactions (several are available in the program), between a beam of stable isotopes and a target. The program simulates the characteristics of the nuclear reactions based on well-established models, as well as the effects of the filtering device located downstream of the target used to create the RIBs.
The LISE++ name is borrowed from the well-known evolution of the C programming language into C++, and is meant to indicate that the program is no longer limited to a fixed configuration like it was in the original "LISE" program, but can be configured to match any type of device or add to an existing device using the concept of modular blocks.
Many physical phenomena are incorporated in this program, from reaction mechanism models, cross section systematics, electron stripping models, energy loss models to beam optics, just to list a few. The references for the calculations are available within the program itself (see the various option windows) and the user is encouraged to consult them for detailed information. The interface and algorithms are designed to provide a user-friendly environment allowing easy adjustments of the input parameters and quick calculations.
Application
The ability to predict as well as identify on-line the composition of RIBs is of prime importance.
This has shaped the main functions of the program:
predict the fragment separator settings necessary to obtain a specific RIB;
predict the intensity and purity of the chosen RIB;
simulate identification plots for on-line comparison;
provide a highly user-friendly graphical environment;
allow configuration for different fragment separators.
The LISE++ package includes configuration files for most of the existing fragment and recoil separators found in the world (examples of fragment separators whose configurations are available in LISE++). Projectile fragmentation, fusion–evaporation, fusion–fission, Coulomb fission, abrasion–fission and two body nuclear reactions models are included in this program and can be used as the production reaction mechanism to simulate experiments at beam energies above the Coulomb barrier.
LISE++ can be used not only to forecast the yields and purities of radioactive beams, but also as an on-line tool for beam identification and tuning during experiments. Much progress has recently been made in ion-beam optics with the introduction of "elemental" blocks, which allow optical matrix calculations within LISE++. New types of configurations based on these blocks allow a detailed analysis of the transmission, useful for fragment separator design, and can be used for optics optimization based on user constraints.
It can be configured to simulate the fragment separators of various research institutes by means of configuration files.
Utilities
Many “satellite” tools have been incorporated into the LISE++ framework, which are accessible with buttons on the main toolbar and include:
Physical calculator
Relativistic Kinematics calculator
Evaporation calculator
Radiation Residue Calculator
Units converter
ISOL catcher utility
Nuclide and Isomeric state Databases utilities
Stripper foil lifetime utility
The program PACE4 (fusion-evaporation code) by A. Gavron et al.
Spectrometric calculator by J. Kantele
The program CHARGE (charge state distribution code) by Th. Stöhlker et al.
The program GLOBAL (charge-state distribution code) by W. E. Meyerhof et al.
The program BI (search for 2-dimensional peaks)
MOTER by H. A. Thiessen et al.: raytracing code with optimization capabilities operating under MS Windows
See also
Examples of Fragment separators at LISE++
A1900 @ NSCL/MSU (USA)
LISE @ GANIL (France)
FRS @ GSI (Germany)
BigRIPS & RIPS @ RIBF/RIKEN (Japan)
Accullina @ JINR (Russia)
Simulation programs used to calculate the transport of ion beams
MOCADI
Beam TRANSPORT code
COSY INFINITY
References
Physics software
Scientific simulation software | LISE++ | Physics | 890 |
3,056,987 | https://en.wikipedia.org/wiki/1%2C2%2C4-Trioxane | 1,2,4-Trioxane is one of the isomers of trioxane. It has the molecular formula CHO and consists of a six membered ring with three carbon atoms and three oxygen atoms. The two adjacent oxygen atoms form a peroxide functional group and the other forms an ether functional group. It is like a cyclic acetal but with one of the oxygen atoms in the acetal group being replaced by a peroxide group.
1,2,4-Trioxane itself has not been isolated or characterized, but rather only studied computationally. However, it constitutes an important structural element of some more complex organic compounds. The natural compound artemisinin, isolated from the sweet wormwood plant (Artemisia annua), and some semi-synthetic derivatives are important antimalarial drugs containing the 1,2,4-trioxane ring. Completely synthetic analogs containing the 1,2,4-trioxane ring are important potential improvements over the naturally derived artemisinins. The peroxide group in the 1,2,4-trioxane core of artemisinin is cleaved in the presence of the malaria parasite leading to reactive oxygen radicals that are damaging to the parasite.
References
Organic peroxides
Trioxanes
Hypothetical chemical compounds | 1,2,4-Trioxane | Chemistry | 260 |
10,087,164 | https://en.wikipedia.org/wiki/Video%20Content%20Protection%20System | The Video Content Protection System (VCPS) is a standard for digital rights management, intended to enforce protection or DVD+R/+RW content and related media.
It was designed to protect video recordings broadcast terrestrially with the broadcast flag used for digital high-definition programming, but its use has been expanded to cover programming obtained in other ways, such as via cable and satellite delivery. This standard is promoted by Philips and is included in the latest SCSI MMC-6 specification.
The system makes use of three different classes of encryption key: one type stored on the media in a "Disc Key Block", one stored in player software, and one in any hardware device that will be used to play (and hence decrypt) the media.
HP and Philips Proposal
Hewlett-Packard and Philips have "discussed how they are trying to develop a content-protection system for DVDs, designed to protect users from burning 'protected' DTV broadcasts." Existing DVD players would not be able to read DVDs which incorporate the technology.
References
External links
Philips VCPS page, includes VCPS specification
SCSI MultiMedia Command Set - 6 (MMC-6), includes VCPS-specific commands
Compact Disc and DVD copy protection
Digital rights management standards
High-definition television
Philips | Video Content Protection System | Technology | 257 |
2,202,712 | https://en.wikipedia.org/wiki/Paromomycin | Paromomycin is an antimicrobial used to treat a number of parasitic infections including amebiasis, giardiasis, leishmaniasis, and tapeworm infection. It is a first-line treatment for amebiasis or giardiasis during pregnancy. Otherwise, it is generally a second line treatment option. It is taken by mouth, applied to the skin, or by injection into a muscle.
Common side effects when taken by mouth include loss of appetite, vomiting, abdominal pain, and diarrhea. When applied to the skin side effects include itchiness, redness, and blisters. When given by injection there may be fever, liver problems, or hearing loss. Use during breastfeeding appears to be safe. Paromomycin is in the aminoglycoside family of medications and causes microbe death by stopping the creation of bacterial proteins.
Paromomycin was discovered in the 1950s from a type of streptomyces and came into medical use in 1960. It is on the World Health Organization's List of Essential Medicines. Paromomycin is available as a generic medication.
Medical uses
It is an antimicrobial used to treat intestinal parasitic infections such as cryptosporidiosis and amoebiasis, and other diseases such as leishmaniasis.
Paromomycin was demonstrated to be effective against cutaneous leishmaniasis in clinical studies in the USSR in the 1960s, and in trials with visceral leishmaniasis in the early 1990s.
The route of administration is intramuscular injection and capsule.
Paromomycin topical cream with or without gentamicin is an effective treatment for ulcerative cutaneous leishmaniasis, according to the results of a phase-3, randomized, double-blind, parallel group–controlled trial.
Pregnancy and breastfeeding
The medication is poorly absorbed. The effect it may have on the baby is still unknown.
There is limited data regarding the safety of taking paromomycin while breastfeeding but because the drug is poorly absorbed minimal amounts of drug will be secreted in breastmilk.
HIV/AIDS
There is limited evidence that paromomycin can be used in persons coinfected with HIV and Cryptosporidium. A few small trials have showed a reduction in oocyst shedding after treatment with paromomycin.
Adverse effects
The most common adverse effects associated with paromomycin sulfate are abdominal cramps, diarrhea, heartburn, nausea, and vomiting. Long-term use of paromomycin increases the risk for bacterial or fungal infection. Signs of overgrowth include white patches in the oral cavity. Other less common adverse events include myasthenia gravis, kidney damage, enterocolitis, malabsorption syndrome, eosinophilia, headache, hearing loss, ringing in the ear, itching, severe dizziness, and pancreatitis.
Interactions
Paromomycin belongs to the aminoglycoside drug class, members of which are toxic to the kidneys and the ears. These toxicities are additive and are more likely to occur when paromomycin is used with other drugs that cause ear and kidney toxicity. Concurrent use of foscarnet increases the risk of kidney toxicity. Concurrent use of colistimethate and paromomycin can cause a dangerous slowing of breathing known as respiratory depression, and should be done with extreme caution if necessary. When used with systemic antibiotics such as paromomycin, the cholera vaccine can cause an immune response. Use with strong diuretics, which can also harm hearing, should be avoided. Paromomycin may have dangerous reactions when used with the paralytic succinylcholine by increasing its neuromuscular effects.
There are no known food or drink interactions with paromomycin.
Mechanism
Paromomycin inhibits protein synthesis in nonresistant cells by binding to 16S ribosomal RNA. This water-soluble, broad-spectrum antibiotic is very similar in action to neomycin. Antimicrobial activity of paromomycin against Escherichia coli and Staphylococcus aureus has been shown. Paromomycin works as an antibiotic by increasing the error rate in ribosomal translation. Paromomycin binds to an RNA loop, where residues A1492 and A1493 are usually stacked, and expels these two residues. These two residues are involved in detection of correct Watson-Crick pairing between the codon and anticodon. When correct interactions are achieved, the binding provides energy to expel the two residues. Paromomycin binding provides enough energy for residue expulsion and thus results in the ribosome incorporating the incorrect amino acid into the nascent peptide chain. Recent real-time measurements of aminoglycoside effects on protein synthesis in live E. coli cells found that paromomycin's interference with protein synthesis is not only due to the misreading of mRNA but also due to a significant reduction in the overall protein elongation rate, suggesting a more comprehensive inhibition of protein synthesis.
Pharmacokinetics
Absorption
GI absorption is poor. Any obstructions or factors which impair GI motility may increase the absorption of the drug from the digestive tract. In addition, any structural damage, such as lesions or ulcerations, will tend to increase drug absorption.
For intramuscular (IM) injection, the absorption is rapid. Paromomycin will reach peak plasma concentration within one hour following IM injection. The in-vitro and in-vivo activities parallel those of neomycin.
Elimination
Almost 100% of the oral dose is eliminated unchanged via feces. Any absorbed drug will be excreted in urine.
History
Paromomycin was discovered in the 1950s amongst the secondary metabolites of a variety of Streptomyces then known as Streptomyces krestomuceticus, now known as Streptomyces rimosus. It came into medical use in 1960.
References
External links
Aminoglycoside antibiotics
Antiprotozoal agents
Orphan drugs
Drugs developed by Pfizer
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Antiparasitic agents | Paromomycin | Biology | 1,279 |
46,775,890 | https://en.wikipedia.org/wiki/Tip-enhanced%20Raman%20spectroscopy | Tip-enhanced Raman spectroscopy (TERS) is a variant of surface-enhanced Raman spectroscopy (SERS) that combines scanning probe microscopy with Raman spectroscopy. High spatial resolution chemical imaging is possible via TERS, with routine demonstrations of nanometer spatial resolution under ambient laboratory conditions, or better at ultralow temperatures and high pressure.
The maximum resolution achievable using an optical microscope, including Raman microscopes, is limited by the Abbe limit, which is approximately half the wavelength of the incident light. Furthermore, with SERS spectroscopy the signal obtained is the sum of contributions from a relatively large number of molecules. TERS overcomes these limitations, as the Raman spectrum obtained originates primarily from the molecules within a few tens of nanometers of the tip.
Although the antennas' electric near-field distributions are commonly understood to determine the spatial resolution, recent experiments showing subnanometer-resolved optical images put this understanding into question. This is because such images enter a regime in which classical electrodynamical descriptions might no longer be applicable and quantum plasmonic and atomistic effects could become relevant.
History
The earliest reports of tip enhanced Raman spectroscopy typically used a Raman microscope coupled with an atomic force microscope. Tip-enhanced Raman spectroscopy coupled with a scanning tunneling microscope (STM-TERS) has also become a reliable technique, since it utilizes the gap mode plasmon between the metallic probe and the metallic substrate.
Equipment
Tip-enhanced Raman spectroscopy requires a confocal microscope, and a scanning probe microscope. The optical microscope is used to align the laser focal point with the tip coated with a SERS active metal. The three typical experimental configurations are bottom illumination, side illumination, and top illumination, depending on which direction the incident laser propagates towards the sample, with respect to the substrate. In the case of STM-TERS, only side and top illumination configurations can be applied, since the substrate is required to be conductive, therefore typically being non-transparent. In this case, the incident laser is usually linearly polarized and aligned parallel to the tip, in order to generate confined surface plasmon at the tip apex. The sample is moved rather than the tip so that the laser remains focused on the tip. The sample can be moved systematically to build up a series of tip enhanced Raman spectra from which a Raman map of the surface can be built allowing for surface heterogeneity to be assessed with up to 1.7 nm resolution. Subnanometer resolution has been demonstrated in certain cases allowing for submolecular features to be resolved.
In 2019, Yan group and Liu group at University of California, Riverside developed a lens-free nanofocusing technique, which concentrates the incident light from a tapered optical fiber to the tip apex of a metallic nanowire and collects the Raman signal through the same optical fiber. Fiber-in-fiber-out NSOM-TERS has been developed.
Applications
Several research groups have used TERS to image single atoms and the internal structure of molecules. In 2019, the Ara Apkarian group at the Center for Chemistry at the Space-Time Limit, University of California, Irvine imaged vibrational normal modes of single porphyrin molecules using TERS. TERS-based DNA sequencing has also been demonstrated.
References
Raman scattering
Raman spectroscopy
Surface science
Plasmonics | Tip-enhanced Raman spectroscopy | Physics,Chemistry,Materials_science | 689 |
75,880,175 | https://en.wikipedia.org/wiki/Kenneth%20Tew | Kenneth D. Tew is a Scottish-American pharmacologist, academic and author. He is a professor in the Department of Cell & Molecular Pharmacology and the John C. West Endowed Chair in Cancer Research at the Medical University of South Carolina.
Tew's research primarily focuses on identifying cancer strategies with strong translational potential, particularly in the context of redox pathways, and resistance to various drugs to understand redox mechanisms and their connections to essential signaling pathways. He has authored, co-authored and edited research articles and books such as Preclinical and Clinical Modulation of Anticancer Drugs and Basic Science of Cancer. He is the recipient of the Outstanding Investigator Grant from the National Cancer Institute in 1993, the 2003 American Cancer Society Scientific Research Award and the 2010 Astellas USA Foundation Award from the American Society for Pharmacology and Experimental Therapeutics.
Tew is an Elected Fellow of the American Association for the Advancement of Science and the American Society for Pharmacology and Experimental Therapeutics. He is an Executive Editor of Biomedicine & Pharmacotherapy.
Education and early career
Tew earned a Bachelor of Science in Microbiology/Genetics from the University of Wales, Swansea in 1973 and a PhD in Biochemical Pharmacology from the University of London, where he also received postdoctoral training in 1976. He served as the Head of the Basic Pharmacology Program at the Lombardi Cancer Center from 1982 to 1985, when he became a member and later Chairman of Pharmacology at the Fox Chase Cancer Center. Concurrently, he worked as an Adjunct Associate Professor of Pharmacology at the University of Pennsylvania until 1990 and was awarded his DSc from the University of London in 1995.
Career
Tew was appointed the G. Willing Chair in Cancer Research at the Medical University of South Carolina from 1999 to 2004. He was the Director of the Developmental Cancer Therapeutics Program at Hollings Cancer Center from 2004 to 2019, and serves as a professor in the Department of Cell & Molecular Pharmacology at the Medical University of South Carolina.
Tew has been the John C. West Chair in Cancer Research at the Medical University of South Carolina since 2004.
Tew held the position of Associate Editor from 1993 to 2007 and later assumed the role of Senior Editor in the Experimental Therapeutics, Molecular Targets, and Chemical Biology Section from 2007 to 2018 for the journal Cancer Research. Concurrently, he held editorial positions including, Editor for Cellular Pharmacology, and Editor-in-Chief of Journal of Pharmacology and Experimental Therapeutics.
Tew has been the Editor (USA) of Biomedicine & Pharmacotherapy since 2002 and Serial Editor for Advances in Cancer Research since 2011. Additionally, he has held appointments at InVaMet Therapeutics and the Greehey Children's Cancer Research Institute Scientific External Advisory Board since 2019.
Research
Through his research laboratory, the Tew laboratory, he has conducted research in redox pathways, with an emphasis on drug development, biomarker identification, and comprehending the effects of reactive oxygen and nitrogen species on cancer cells. He has focused on distinct post-translationally modified S-glutathionylated proteins affecting cell-signaling pathways, potentially acting as surrogate plasma biomarkers for drug response induced by oxidative and nitrosative stress. He holds patents for his work, contributing to the development of a glutathione S-transferase-activated prodrug and two small molecules in clinical development as potential myeloproliferative agents.
Works
Tew has co-authored two books focusing on carcinogenesis and cancer treatment strategies. He co-wrote Preclinical and Clinical Modulation of Anticancer Drugs with Peter J. Houghton and Janet A. Houghton, providing an analysis of theoretical and practical approaches to the design and implementation of modulation principles. His collaborative work with Gary D. Kruh, Basic Science of Cancer, explored the advancements in cancer research, covering interrelated topics such as tumor suppressor genes, apoptosis, transcriptional regulation, pharmacology of anticancer drugs, cytogenetic techniques, oncogenes, and signal transductions.
Tew co-edited books from the series Advances in Cancer Research alongside Paul B. Fisher, where they provided reviews on diverse cancer research topics. In a review published in the Journal of Medicinal Chemistry, Thomas J. Bardos wrote about the series, "This rapidly growing series of volumes containing many excellent, highly informative, in-depth reviews on a variety of timely topics relating to cancer research has always been most representative in the areas of tumor biology and immunology."
Drug development
Tew's work on redox and pharmacogenetics focused on the discovery and development of drugs. Alongside colleagues, he introduced a novel zebrafish model with a glutathione S-transferase π1 (gstp1) knockout, revealing insights into redox homeostasis, reductive stress, and responses to drugs inducing endoplasmic reticulum stress and the unfolded protein response. His research has looked into the role of GTSP in cellular redox homeostasis and its over-expression in cancer drug resistance, particularly in the context of preclinical and clinical testing of the GSTP inhibitor TLK199 (Telintra) for treating myelodysplastic syndrome. He further revealed that the absence of microsomal glutathione transferase 1 (MGST1) impacts melanin biosynthesis and melanoma growth in mice and that, in numerous species, members of the GST family are involved in early hematopoiesis, and that the lack of GSTP in dendritic cells leads to increased proliferation, ROS levels and ERα levels, suggesting a role for GSTP in controlling ERα activity and dendritic cell function.
Additionally, Tew and colleagues investigated how S-glutathionylation of the protein BiP, mediated by GSTP, contributes to acquired resistance to the multiple myeloma treatment bortezomib (Btz) by impacting BiP's foldase and ATPase activities. In another collaborative study published in Scientific Reports, he found that S-glutathionylated serpins, specifically A1 and A3, are elevated in the blood of prostate cancer patients after radiation therapy, suggesting their potential as biomarkers for radiation exposure. He also explored melanoma cell lines resistant to reductive stress agents, showcasing changes in cell and mitochondrial morphology, metabolic preferences, and adaptive mechanisms in lethal reductive stress conditions.
Cancer treatment strategies
Tew has studied cancer strategies to devise new treatments. In joint research, he highlighted the significance of microsomal glutathione transferase 1 (MGST1) in melanin biosynthetic pathways, revealing its role as a determinant of tumor progression, with MGST1 knockdown leading to depigmentation, increased oxidative stress, and hindered tumor growth. He also determined that inhibiting MGST1 in melanoma enhances oxidative stress, increases sensitivity to anticancer drugs, and reduces metastasis, improving the effectiveness of therapies.
In 2019, Tew examined ME-344, a second-generation isoflavone with anticancer properties, demonstrating its impact on redox homeostasis, mitochondrial function, and specific targeting of heme oxygenase 1 (HO-1) in lung cancer cells. In 2020, he determined that ME-344 targets VDAC1 and VDAC2 in lung cancer cells, leading to ROS generation, Bax translocation, cytochrome c release, and apoptosis, highlighting their potential as therapeutic targets. He also assessed how reactive oxygen species (ROS) play a dual role in cancer evolution, influencing both tumorigenesis and cell death, and highlighted tumor cell adaptations in metabolism and antioxidant defenses to manage ROS levels during different stages of cancer development.
Awards and honors
1993 – Outstanding Investigator Grant, National Cancer Institute
2003 – Scientific Research Award, American Cancer Society
2010 – Astellas USA Foundation Award, American Society for Pharmacology and Experimental Therapeutics
Bibliography
Selected books
Mechanisms of Drug Resistance in Neoplastic Cells (1988) ISBN 9780127633626
Preclinical and Clinical Modulation of Anticancer Drugs (1993) ISBN 9780849372919
Basic Science of Cancer (2000) ISBN 9781468484397
Advances in Cancer Research (2014) ISBN 9780124071902
Selected articles
Adler, V., Yin, Z., Fuchs, S. Y., Benezra, M., Rosario, L., Tew, K. D., ... & Ronai, Z. E. (1999). Regulation of JNK signaling by GSTp. The EMBO journal, 18(5), 1321–1334.
Townsend, D. M., Tew, K. D., & Tapiero, H. (2003). The importance of glutathione in human disease. Biomedicine & pharmacotherapy, 57(3-4), 145–155.
Townsend, D. M., & Tew, K. D. (2003). The role of glutathione-S-transferase in anti-cancer drug resistance. Oncogene, 22(47), 7369–7375.
Tapiero, H., & Tew, K. D. (2003). Trace elements in human physiology and pathology: zinc and metallothioneins. Biomedicine & Pharmacotherapy, 57(9), 399–411.
Hayes, J. D., Dinkova-Kostova, A. T., & Tew, K. D. (2020). Oxidative stress in cancer. Cancer cell, 38(2), 167–197.
Tew, K.D. Alkylating Agents. In: Principles & Practice of Oncology. Eds. DeVita, Hellman & Rosenberg. pp246–256, 2018.
Tew, K.D. Protein S-Glutathionylation & Glutathione S-transferase P. In: Glutathione. Editor: Leopold Flohé. CRC Press. Chapter 12, 201–214, 2018.
References
Pharmacologists
Alumni of Swansea University
Alumni of the University of London
Medical University of South Carolina faculty
Fellows of the American Association for the Advancement of Science
Fellows of the American Society for Pharmacology and Experimental Therapeutics
Year of birth missing (living people)
Living people | Kenneth Tew | Chemistry | 2,239 |
5,094,367 | https://en.wikipedia.org/wiki/European%20Conference%20on%20Object-Oriented%20Programming | The European Conference on Object-Oriented Programming (ECOOP) is an annual conference covering topics on object-oriented programming systems, languages and applications. Like other conferences, ECOOP offers various tracks and many simultaneous sessions, and thus means different things to different people.
The first ECOOP was held in Paris, France in 1987. It operates under the auspices of the Association Internationale pour les Technologies Objets, a non-profit organization located in Germany.
ECOOP’s venue changes every year, and the categories of its program vary. Historically ECOOP has combined the presentation of academic papers with comparatively practical experience reports, panels, workshops and tutorials.
ECOOP helped object-oriented programming develop in Europe into what is now mainstream programming, and helped incubate a number of related disciplines, including design patterns, refactoring, aspect-oriented programming, and agile software development.
The winners of the annual AITO Dahl-Nygaard Prize are offered the opportunity to give a keynote presentation at ECOOP.
The sister conference of ECOOP in North America is OOPSLA.
See also
List of computer science conferences
List of computer science conference acronyms
Outline of computer science
References
External links
Computer science conferences
Dahl–Nygaard Prize
Information technology organizations based in Europe
Programming languages conferences | European Conference on Object-Oriented Programming | Technology | 256 |
48,044,280 | https://en.wikipedia.org/wiki/Primal%20Carnage%3A%20Extinction | Primal Carnage: Extinction is an asymmetrical multiplayer game released for Microsoft Windows and PlayStation 4. It features human versus dinosaur combat. Players choose which team to play on, and each team has a set of characters divided into classes. The game is a sequel to the 2012 Windows game Primal Carnage, which was developed by Lukewarm Media. Like its predecessor, it features similar first-person shooter human gameplay and third-person dinosaur gameplay.
The sequel began as a complete rebuild of the original game and was to be released as a free update. However, Circle 5 Studios took over development from Lukewarm Media, and announced that the update would instead be released as a separate game known as Primal Carnage: Extinction, co-developed by Pub Games. It was released for Windows on April 3, 2015. The PlayStation 4 version, developed by Panic Button, was released on October 20, 2015.
A remastered version titled Primal Carnage: Evolution for the PlayStation 4 was announced on September 28, 2023, aiming to bring the console release up to date with the Steam version of Primal Carnage: Extinction.
Gameplay
Primal Carnage: Extinction is an asymmetrical multiplayer game similar to its predecessor, Primal Carnage. The game pits humans against dinosaurs, with team members on both sides divided into character classes. Gameplay is viewed from a third-person perspective when playing as a dinosaur. Playing on the human team switches the game to a first-person shooter. Humans have an array of weapons to use, while dinosaurs roar to activate a number of different abilities, although a waiting period exists in between the use of such abilities. Playable creatures include Carnotaurus, Dilophosaurus, the fictional Novaraptor, Pteranodon, and Tyrannosaurus.
Game modes include Team Deathmatch and Get to the Chopper. In the latter, human players try to reach a helicopter and escape while dinosaur players try to stop them. Other modes include Survival, in which humans face off against a growing number of dinosaurs; and Free Roam, allowing players to explore a level without objectives.
Development and release
The original Primal Carnage was developed by Lukewarm Media and released for Microsoft Windows in 2012. As of 2014, the company was working on a complete rebuild of Primal Carnage, replacing its game code for a less-glitchy gameplay experience. The rebuild, referred to as Primal Carnage 2.0, was initially planned as a free update. Circle 5 Studios took over the Primal Carnage series later in 2014, following disagreements within Lukewarm Media over a planned prequel game known as Primal Carnage: Genesis. The new company consisted of a modding community dedicated to the original game.
Circle 5 announced in October 2014 that the free update would instead be released as a separate game, Primal Carnage: Extinction, which would serve as a sequel. Owners of the original game could purchase the sequel for a discount. In addition to a Windows version, it was also announced that the game would receive a PlayStation 4 port. Primal Carnage: Extinction was co-developed by Circle 5 along with Pub Games, based in Australia. Like its predecessor, the game was created using Unreal Engine 3.
The Windows version was officially released on Steam on April 3, 2015, after exiting the Early Access phase. The PlayStation 4 version was developed by Panic Button and published through the PlayStation Network. It was released in the U.S. on October 20, 2015, followed by a European release on November 24.
On September 28, 2023, Primal Carnage: Evolution, a remastered version for the PlayStation 4, was announced with the aim of bringing the console release up to date with the Steam version of Primal Carnage: Extinction.
Reception
The PlayStation 4 version of Primal Carnage: Extinction received "generally unfavorable reviews" according to Metacritic. The Windows version, upon its official launch in April 2015, was heavily criticized for technical problems that were still present after months of Early Access. On TechRaptor, Georgina Young said that "the concept is awesome" but "bugs and glitches are rampant", calling the game "virtually unplayable" on MacBook Pro (the game does not officially support Mac) and giving it a 2.5 rating out of 10. The PlayStation 4 version was also criticized for glitches.
Matt Adcock of Push Square wrote that "anything with dinosaurs in it should be more entertaining than this". He praised the dinosaur animations but opined that the environments lacked "pizazz", while stating that the humans looked too cartoonish. Rosario Salatiello of Multiplayer.it was critical of the artificial intelligence and found the use of Unreal Engine 3 to be outdated. Salatiello concluded that the game would have benefitted from more time in development. PlayStation Official Magazine – UK called it a "flimsy-feeling team shooter that squanders an appealing premise in a mess of poor controls and design". Writing for Blast Magazine, Grant Bickelhaupt called it "a thoroughly good time"; however, the balancing was criticized, with the human gameplay described as "punishing." The review gave Extinction 2.8 stars out of 5, with poor ratings in the story and lasting appeal categories.
In a later review for GameGrin, Ryan Davies wrote, "A simple, but fun, FPS that could have been so much better. The dinosaurs are certainly fun, but it won't take long for you to grow tired of the game at large," giving it a score of 6/10. HookedGamers awarded the game a Fun Score of 6.8, praising the dinosaur sound design, saying that it "gives the game an added boost." When describing the game as a whole, the reviewer docked points for clipping issues and having few game modes, noting that "it does lack that little bit of polish that would make it a great game," but summing up with, "Primal Carnage: Extinction is still worth your time, especially if you love dinosaurs." CanadianOnlineGamers praised the game's dinosaur animations, sound design and music, scoring the game 70/100 and calling Extinction "A fun dino romp for fans of these giant (and not-so-giant) lizards. It doesn't really bring anything else new to the table in the team deathmatch genre, but what it does bring, it does it well."
References
External links
2015 video games
Asymmetrical multiplayer video games
First-person shooters
Indie games
Multiplayer and single-player video games
Panic Button (company) games
PlayStation 4 games
Unreal Engine 3 games
Video games about dinosaurs
Video games developed in Australia
Video games developed in the United States
Video games set on islands
Windows games | Primal Carnage: Extinction | Physics | 1,360 |
15,561,312 | https://en.wikipedia.org/wiki/12-Crown-4 | 12-Crown-4, also called 1,4,7,10-tetraoxacyclododecane and lithium ionophore V, is a crown ether with the formula C8H16O4. It is a cyclic tetramer of ethylene oxide which is specific for the lithium cation.
Synthesis
12-Crown-4 can be synthesized using a modified Williamson ether synthesis, with LiClO4 supplying the templating cation:
(CH2OCH2CH2Cl)2 + (CH2OH)2 + 2 NaOH → (CH2CH2O)4 + 2 NaCl + 2 H2O
It also forms from the cyclic oligomerization of ethylene oxide in the presence of gaseous boron trifluoride.
Properties
Like other crown ethers, 12-crown-4 complexes with alkali metal cations. The cavity diameter of 1.2–1.5 Å gives it a high selectivity towards the lithium cation (ionic diameter 1.36 Å).
Its point group is S4. The dipole moment of 12-crown-4 varies with solvent and temperature. At 25 °C, the dipole moment of 12-crown-4 was determined as 2.33 ± 0.03 D in cyclohexane and 2.46 ± 0.01 D in benzene.
References
Sigma-Aldrich Handbook of Fine Chemicals, 2007, page 768.
Sigma-Aldrich, 12-Crown-4, 98%, product information, 2018.
See also
Crown ether
Cyclen, a similar molecule with N atoms (aza groups) instead of O atoms (ethers)
Crown ethers
Twelve-membered rings | 12-Crown-4 | Chemistry | 359 |
78,285,850 | https://en.wikipedia.org/wiki/List%20of%20FreeBSD%20malware | FreeBSD malware includes viruses, Trojans, worms and other types of malware that affect the FreeBSD operating system.
Threats
The following is a partial list of known FreeBSD malware.
Chaos, malware that infects Windows, Linux and FreeBSD devices
Hive, ransomware that encrypts Linux and FreeBSD systems
Interlock, ransomware targeting Windows and FreeBSD operating systems, appeared at the end of September 2024.
References
FreeBSD
Malware by platform
Lists of software | List of FreeBSD malware | Technology | 110 |
354,320 | https://en.wikipedia.org/wiki/Institution%20of%20Civil%20Engineers | The Institution of Civil Engineers (ICE) is an independent professional association for civil engineers and a charitable body in the United Kingdom. Based in London, ICE has over 92,000 members, of whom three-quarters are located in the UK, while the rest are located in more than 150 other countries. The ICE aims to support the civil engineering profession by offering professional qualification, promoting education, maintaining professional ethics, and liaising with industry, academia and government. Under its commercial arm, it delivers training, recruitment, publishing and contract services. As a professional body, ICE aims to support and promote professional learning (for both students and existing practitioners), manage professional ethics, safeguard the status of engineers, and represent the interests of the profession in dealings with government. It sets standards for membership of the body; works with industry and academia to progress engineering standards; and advises on education and training curricula.
History
The late 18th century and early 19th century saw the founding of many learned societies and professional bodies (for example, the Royal Society and the Law Society). Groups calling themselves civil engineers had been meeting for some years from the late 18th century, notably the Society of Civil Engineers formed in 1771 by John Smeaton (renamed the Smeatonian Society after his death). At that time, formal engineering in Britain was limited to the military engineers of the Corps of Royal Engineers, and in the spirit of self-help prevalent at the time and to provide a focus for the fledgling 'civilian engineers', the Institution of Civil Engineers was founded as the world's first professional engineering body.
The initiative to found the Institution was taken in 1818 by eight young engineers, Henry Robinson Palmer (23), William Maudslay (23), Thomas Maudslay (26), James Jones (28), Charles Collinge (26), John Lethbridge, James Ashwell (19) and Joshua Field (32), who held an inaugural meeting on 2 January 1818, at the Kendal Coffee House in Fleet Street. The institution made little headway until a key step was taken – the appointment of Thomas Telford as the first President of the body. Greatly respected within the profession and blessed with numerous contacts across the industry and in government circles, he was instrumental in drumming up membership and getting a Royal Charter for ICE in 1828. This official recognition helped establish ICE as the pre-eminent organisation for engineers of all disciplines.
Early definitions of a Civil Engineer can be found in the discussions held on 2 January 1818 and in the application for Royal Chartership. In 1818 Palmer said that:
The objects of such institution, as recited in the charter, and reported in The Times, were
After Telford's death in 1834, the organisation moved into premises in Great George Street in the heart of Westminster in 1839, and began to publish learned papers on engineering topics. Its members, notably William Cubitt, were also prominent in the organisation of the Great Exhibition of 1851.
For 29 years ICE provided the forum for engineers practising in all the disciplines recognised today. Mechanical engineer and tool-maker Henry Maudslay was an early member and Joseph Whitworth presented one of the earliest papers – it was not until 1847 that the Institution of Mechanical Engineers was established (with George Stephenson as its first President).
By the end of the 19th century, ICE had introduced examinations for professional engineering qualifications to help ensure and maintain high standards among its members – a role it continues today.
The ICE's Great George Street headquarters, designed by James Miller, was built by John Mowlem & Co and completed in 1911.
Membership and professional qualification
The institution is a membership organisation comprising 95,460 members worldwide (as of 31 December 2022); around three-quarters are located in the United Kingdom. Membership grades include:
Student
Graduate (GMICE)
Associate (AMICE)
Technician (TMICE)
Member (MICE)
Fellow (FICE)
ICE is a licensed body of the Engineering Council and can award the Chartered Engineer (CEng), Incorporated Engineer (IEng) and Engineering Technician (EngTech) professional qualifications. Members who are Chartered Engineers can use the protected title Chartered Civil Engineer.
ICE is also licensed by the Society for the Environment to award the Chartered Environmentalist (CEnv) professional qualification.
Publishing
The Institution of Civil Engineers also publishes technical studies covering research and best practice in civil engineering. Under its commercial arm, Thomas Telford Ltd, it delivers training, recruitment, publishing and contract services, such as the NEC Engineering and Construction Contract. All the profits of Thomas Telford Ltd go back to the Institution to further its stated aim of putting civil engineers at the heart of society. The publishing division has existed since 1836 and is today called ICE Publishing. ICE Publishing produces roughly 30 books a year, including the ICE Manuals series, and 30 civil engineering journals, including the ICE Proceedings in nineteen parts, Géotechnique, and the Magazine of Concrete Research. The ICE Science series is now also published by ICE Publishing. ICE Science currently consists of five journals: Nanomaterials and Energy, Emerging Materials Research, Bioinspired, Biomimetic and Nanobiomaterials, Green Materials and Surface Innovations.
Nineteen individual parts now make up the Proceedings, as follows:
Proceedings of the Institution of Civil Engineers: Bridge Engineering
Proceedings of the Institution of Civil Engineers: Civil Engineering
Proceedings of the Institution of Civil Engineers: Construction Materials
Proceedings of the Institution of Civil Engineers: Energy
Proceedings of the Institution of Civil Engineers: Engineering and Computational Mechanics
Proceedings of the Institution of Civil Engineers: Engineering History and Heritage
Proceedings of the Institution of Civil Engineers: Engineering Sustainability
Proceedings of the Institution of Civil Engineers: Forensic Engineering
Proceedings of the Institution of Civil Engineers: Geotechnical Engineering
Proceedings of the Institution of Civil Engineers: Ground Improvement
Proceedings of the Institution of Civil Engineers: Management, Procurement and Law
Proceedings of the Institution of Civil Engineers: Maritime Engineering
Proceedings of the Institution of Civil Engineers: Municipal Engineer
Proceedings of the Institution of Civil Engineers: Smart Infrastructure and Construction
Proceedings of the Institution of Civil Engineers: Structures and Buildings
Proceedings of the Institution of Civil Engineers: Transport
Proceedings of the Institution of Civil Engineers: Urban Design and Planning
Proceedings of the Institution of Civil Engineers: Waste and Resource Management
Proceedings of the Institution of Civil Engineers: Water Management
ICE members, except for students, also receive the New Civil Engineer magazine (published weekly from 1995 to 2017 by Emap, now published monthly by Metropolis International).
Specialist Knowledge Societies
The ICE also administers 15 Specialist Knowledge Societies created at different times to support special interest groups within the civil engineering industry, some of which are British sections of international and/or European bodies. The societies provide continuing professional development and assist in the transfer of knowledge concerning specialist areas of engineering.
The Specialist Knowledge Societies are:
Governance
The institution is governed by the ICE Trustee Board, comprising the President, three Vice Presidents, four members elected from the membership, three ICE Council members, and one nominated member. The President is the public face of the institution and day-to-day management is the responsibility of the Director General.
President
The ICE President is elected annually and the holder for 2024–2025 is Jim Hall.
Each year a number of young engineers have been chosen as President's apprentices. The scheme was started in 2005 during the presidency of Gordon Masterton, who also initiated a President's blog, now the ICE Infrastructure blog. Each incoming President sets out the main theme of his or her year of office in a Presidential Address.
Many of the profession's greatest engineers have served as President of the ICE including:
One of Britain's greatest engineers, Isambard Kingdom Brunel, died before he could take up the post (he was vice-president from 1850).
Female civil engineers
The first woman member of ICE was Dorothy Donaldson Buchanan in 1927. The first female Fellows elected were Molly Fergusson (1957), Marie Lindley (1972), Helen Stone (1991) and Joanna Kennedy (1992). In January 2025, 30-year-old Costain engineer Georgia Thompson became the youngest woman to be elected a Fellow of the ICE.
The three female Presidents (to date) are Jean Venables, who became the 144th holder of the office in 2008, Rachel Skinner, who became President in 2020, and Anusha Shah, the President in 2023.
In January 1969 the Council of the Institution set up a working party to consider the role of women in engineering. Among its conclusions were that 'while women have certainly established their competence throughout the professional engineering field, there is clearly a built-in or unconscious prejudice against them'. The WISE Campaign (Women into Science and Engineering) was launched in 1984; by 1992 3% of the total ICE membership of 79,000 was female, and only 0.8% of chartered civil engineers were women. By 2016 women comprised nearly 12% of total membership, almost 7% of chartered civil engineers and just over 2% of Fellows. In June 2015 a Presidential Commission on diversity was announced. By the start of 2023 women made up 16% of overall membership, with female fellows comprising 6% of the fellowship.
Awards
The Institution makes various awards to recognise the work of its members. In addition to awards for technical papers, reports and competition entries it awards medals for different achievements.
Gold Medal – The Gold Medal is awarded to an individual who has made valuable contributions to civil engineering over many years. This may cover contributions in one or more areas, such as, design, research, development, investigation, construction, management (including project management), education and training.
Garth Watson Medal – The Garth Watson Medal is awarded for dedicated and valuable service to ICE by an ICE Member or member of staff.
Brunel Medal – The Brunel Medal is awarded to teams, individuals or organisations operating within the built environment and recognises excellence in civil engineering.
Edmund Hambly Medal – The Edmund Hambly Medal is awarded for creative design in an engineering project that makes a substantial contribution to sustainable development. It is awarded to projects, of any scale, which take into account such factors as full life-cycle effects, including de-commissioning, and show an understanding of the implications of infrastructure impact upon the environment. The medal is awarded in honour of past president Edmund Hambly, who was a proponent of sustainable engineering.
International Medal – The International Medal is awarded annually to a civil engineer who has made an outstanding contribution to civil engineering outside the United Kingdom or an engineer who resides outside the United Kingdom.
Warren Medal – The Warren Medal is awarded annually to an ICE member in recognition of valuable services to his or her region.
Telford Medal – The Telford Medal is the highest prize that can be awarded by the ICE for a paper.
James Alfred Ewing Medal – The James Alfred Ewing Medal is awarded by the council on the joint nomination of the president and the President of the Royal Society.
James Forrest Medal – The James Forrest Medal was established in honour of James Forrest upon his retirement as secretary in 1896.
Baker Medal – The Baker Medal was established in 1934 to recognise papers that promote or cover developments in engineering practice, or investigation into problems with which Sir Benjamin Baker was specially identified.
Jean Venables Medal – Since 2011, the Institution has awarded a Jean Venables Medal to its best Technician Professional Review candidate.
President's Medal
Emerging Engineer Award
James Rennie Medal – For the best Chartered Professional Review candidate of the year. Named after James Rennie, a civil engineer noted for his devotion to the training of new engineers.
Renée Redfern Hunt Memorial Prize – For the best chartered or member professional review written exercise of the year. Named for an ICE staff member who served as examinations officer from 1945 to 1981.
Tony Chapman Medal – For the best member professional review candidate of the year. Named after an ICE council member who played a key role in the integration of the Board of Incorporated Engineers and Technicians into the institution and in promoting incorporated engineer status.
Chris Binnie Award for Sustainable Water Management
The Bev Waugh Award – Awarded since 2021 for productivity and culture, recognising a leader or individual who has had a positive impact on joint team working
Adrian Long Medal
Student chapters
The ICE has student chapters in several countries including Hong Kong, India, Indonesia, Malaysia, Malta, Pakistan, Poland, Sudan, Trinidad, and United Arab Emirates.
Arms
See also
Chartered Institution of Civil Engineering Surveyors
Construction Industry Council
References
Charles Matthew Norrie (1956). Bridging the Years – a short history of British Civil Engineering. Edward Arnold (Publishers) Ltd.
Garth Watson (1988). The Civils – The story of the Institution of Civil Engineers. Thomas Telford Ltd
Hugh Ferguson and Mike Chrimes (2011). The Civil Engineers – The story of the Institution of Civil Engineers and the People Who Made It. Thomas Telford Ltd
External links
Royal Charter and other documentation for governance of ICE
ICE Royal Charter, By-laws and Regulations,
ICE Publishing website
ICE Science website (archived 11 April 2013)
Civil engineering professional associations
ECUK Licensed Members
Organisations based in the City of Westminster
Organizations established in 1818
1818 establishments in the United Kingdom | Institution of Civil Engineers | Engineering | 2,643 |
42,939,954 | https://en.wikipedia.org/wiki/Gloiocephala%20lutea | Gloiocephala lutea is a species of fungus native to Ecuador. It was described as new to science by Rolf Singer in 1976.
References
Physalacriaceae
Fungi described in 1976
Fungi of Ecuador
Taxa named by Rolf Singer
Fungus species | Gloiocephala lutea | Biology | 52 |
66,516,321 | https://en.wikipedia.org/wiki/Sunacovirus | Sunacovirus is a subgenus of viruses in the genus Alphacoronavirus, consisting of a single species, Suncus murinus coronavirus X74.
References
Virus subgenera
Alphacoronaviruses | Sunacovirus | Biology | 45 |
58,256 | https://en.wikipedia.org/wiki/Wax | Waxes are a diverse class of organic compounds that are lipophilic, malleable solids near ambient temperatures. They include higher alkanes and lipids, typically with melting points above about 40 °C (104 °F), melting to give low viscosity liquids. Waxes are insoluble in water but soluble in nonpolar organic solvents such as hexane, benzene and chloroform. Natural waxes of different types are produced by plants and animals and occur in petroleum.
Chemistry
Waxes are organic compounds that characteristically consist of long aliphatic alkyl chains, although aromatic compounds may also be present. Natural waxes may contain unsaturated bonds and include various functional groups such as fatty acids, primary and secondary alcohols, ketones, aldehydes and fatty acid esters. Synthetic waxes often consist of homologous series of long-chain aliphatic hydrocarbons (alkanes or paraffins) that lack functional groups.
Plant and animal waxes
Waxes are synthesized by many plants and animals. Those of animal origin typically consist of wax esters derived from a variety of fatty acids and carboxylic alcohols. In waxes of plant origin, characteristic mixtures of unesterified hydrocarbons may predominate over esters. The composition depends not only on species, but also on geographic location of the organism.
Animal waxes
The best-known animal wax is beeswax, used in constructing the honeycombs of beehives, but other insects also secrete waxes. A major component of beeswax is myricyl palmitate which is an ester of triacontanol and palmitic acid. Its melting point is . Spermaceti occurs in large amounts in the head oil of the sperm whale. One of its main constituents is cetyl palmitate, another ester of a fatty acid and a fatty alcohol. Lanolin is a wax obtained from wool, consisting of esters of sterols.
Plant waxes
Plants secrete waxes into and on the surface of their cuticles as a way to control evaporation, wettability and hydration. The epicuticular waxes of plants are mixtures of substituted long-chain aliphatic hydrocarbons, containing alkanes, alkyl esters, fatty acids, primary and secondary alcohols, diols, ketones and aldehydes. From the commercial perspective, the most important plant wax is carnauba wax, a hard wax obtained from the Brazilian palm Copernicia prunifera. Containing the ester myricyl cerotate, it has many applications, such as confectionery and other food coatings, car and furniture polish, floss coating, and surfboard wax. Other more specialized vegetable waxes include jojoba oil, candelilla wax and ouricury wax.
Modified plant and animal waxes
Plant and animal based waxes or oils can undergo selective chemical modifications to produce waxes with more desirable properties than are available in the unmodified starting material. This approach has relied on green chemistry approaches including olefin metathesis and enzymatic reactions and can be used to produce waxes from inexpensive starting materials like vegetable oils.
Petroleum derived waxes
Although many natural waxes contain esters, paraffin waxes are hydrocarbons, mixtures of alkanes usually in a homologous series of chain lengths. These materials represent a significant fraction of petroleum. They are refined by vacuum distillation. Paraffin waxes are mixtures of saturated n- and iso- alkanes, naphthenes, and alkyl- and naphthene-substituted aromatic compounds. A typical alkane paraffin wax chemical composition comprises hydrocarbons with the general formula CnH2n+2, such as hentriacontane, C31H64. The degree of branching has an important influence on the properties. Microcrystalline wax is a lesser produced petroleum based wax that contains higher percentage of isoparaffinic (branched) hydrocarbons and naphthenic hydrocarbons.
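As a quick arithmetic check of the homologous-series formula (a worked instance of the example just given, nothing beyond it):

$$ \mathrm{C}_n\mathrm{H}_{2n+2},\qquad n = 31:\;\; 2 \times 31 + 2 = 64 \;\Rightarrow\; \mathrm{C}_{31}\mathrm{H}_{64}\ \text{(hentriacontane)}. $$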
Millions of tons of paraffin waxes are produced annually. They are used in foods (such as chewing gum and cheese wrapping), in candles and cosmetics, as non-stick and waterproofing coatings and in polishes.
Montan wax
Montan wax is a fossilized wax extracted from coal and lignite. It is very hard, reflecting the high concentration of saturated fatty acids and alcohols. Although dark brown and odorous, it can be purified and bleached to give commercially useful products.
Polyethylene and related derivatives
About 200 million kilograms of polyethylene waxes are consumed annually.
Polyethylene waxes are manufactured by one of three methods:
The direct polymerization of ethylene, potentially including co-monomers;
The thermal degradation of high molecular weight polyethylene resin;
The recovery of low molecular weight fractions from high molecular weight resin production.
Each production technique generates products with slightly different properties. Key properties of low molecular weight polyethylene waxes are viscosity, density and melt point.
Polyethylene waxes produced by means of degradation or recovery from polyethylene resin streams contain very low molecular weight materials that must be removed to prevent volatilization and potential fire hazards during use. Polyethylene waxes manufactured by this method are usually stripped of low molecular weight fractions to yield a flash point >500 °F (>260 °C). Many polyethylene resin plants produce a low molecular weight stream often referred to as low polymer wax (LPW). LPW is unrefined and contains volatile oligomers, corrosive catalyst and may contain other foreign material and water. Refining of LPW to produce a polyethylene wax involves removal of oligomers and hazardous catalyst. Proper refining of LPW to produce polyethylene wax is especially important when being used in applications requiring FDA or other regulatory certification.
Uses
Waxes are mainly consumed industrially as components of complex formulations, often for coatings. The main use of polyethylene and polypropylene waxes is in the formulation of colourants for plastics. Waxes confer matting effects (i.e., to confer non-glossy finishes) and wear resistance to paints. Polyethylene waxes are incorporated into inks in the form of dispersions to decrease friction. They are employed as release agents, find use as slip agents in furniture, and confer corrosion resistance.
Candles
Waxes such as paraffin wax or beeswax, and hard fats such as tallow, are used to make candles, used for lighting and decoration. Soy wax, another candle fuel, is made by hydrogenating soybean oil.
Wood products
Waxes are used as finishes and coatings for wood products. Beeswax is frequently used as a lubricant on drawer slides where wood to wood contact occurs.
Other uses
Sealing wax was used to close important documents in the Middle Ages. Wax tablets were used as writing surfaces. Several types of wax were distinguished in the Middle Ages: four named kinds (Ragusan, Montenegro, Byzantine, and Bulgarian), "ordinary" waxes from Spain, Poland, and Riga, unrefined waxes, and colored waxes (red, white, and green). Waxes are used to make waxed paper, impregnating and coating paper and card to waterproof it or make it resistant to staining, or to modify its surface properties. Waxes are also used in shoe polishes, wood polishes, and automotive polishes, as mold release agents in mold making, as a coating for many cheeses, and to waterproof leather and fabric. Wax has been used since antiquity as a temporary, removable model in lost-wax casting of gold, silver and other materials.
Wax with colorful pigments added has been used as a medium in encaustic painting, and is used today in the manufacture of crayons, china markers and colored pencils. Carbon paper, used for making duplicate typewritten documents was coated with carbon black suspended in wax, typically montan wax, but has largely been superseded by photocopiers and computer printers. In another context, lipstick and mascara are blends of various fats and waxes colored with pigments, and both beeswax and lanolin are used in other cosmetics. Ski wax is used in skiing and snowboarding. Also, the sports of surfing and skateboarding often use wax to enhance the performance.
Some waxes are considered food-safe and are used to coat wooden cutting boards and other items that come into contact with food. Beeswax or coloured synthetic wax is used to decorate Easter eggs in Romania, Ukraine, Poland, Lithuania and the Czech Republic. Paraffin wax is used in making chocolate covered sweets.
Wax is also used in wax bullets, which are used as simulation aids, and for wax sculpturing.
Specific examples
Animal waxes
Beeswax – produced by honey bees
Chinese wax – produced by the scale insect Ceroplastes ceriferus
Lanolin (wool wax) – from the sebaceous glands of sheep
Shellac wax – from the lac insect Kerria lacca
Spermaceti – from the head cavities and blubber of the sperm whale
Vegetable waxes
Bayberry wax – from the surface wax of the fruits of the bayberry shrub, Myrica faya
Candelilla wax – from the Mexican shrubs Euphorbia cerifera and Euphorbia antisyphilitica
Carnauba wax – from the leaves of the carnauba palm, Copernicia cerifera
Castor wax – catalytically hydrogenated castor oil
Esparto wax – a byproduct of making paper from esparto grass (Macrochloa tenacissima)
Japan wax – a vegetable triglyceride (not a true wax), from the berries of Rhus and Toxicodendron species
Jojoba oil – a liquid wax ester, from the seed of Simmondsia chinensis.
Ouricury wax – from the Brazilian feather palm, Syagrus coronata.
Rice bran wax – obtained from rice bran (Oryza sativa)
Soy wax – from soybean oil
Tallow tree wax – from the seeds of the tallow tree Triadica sebifera.
Mineral waxes
Ceresin waxes
Montan wax – extracted from lignite and brown coal
Ozocerite – found in lignite beds
Peat waxes
Petroleum waxes
Paraffin wax – made of long-chain alkane hydrocarbons
Microcrystalline wax – with very fine crystalline structure
See also
Slip melting point
Wax acid
Wax argument, or the "ball of wax example", is a thought experiment originally articulated by René Descartes.
References
External links
Waxes
Petroleum products
Plant products
Animal products
Lipids
Esters
Soft matter | Wax | Physics,Chemistry,Materials_science | 2,279 |
55,791,579 | https://en.wikipedia.org/wiki/Tianhuang%20Emperor | The Great Emperor of the Curved Array, also called the Gouchen Emperor and Tianhuang Emperor, is one of the highest sky deities of Taoism. He is one of the Four Sovereigns and is in charge of heaven, earth, and humanity, and of wars in the human world.
Chinese mythology
The "Curved Array" is a constellation in the Purple Forbidden enclosure, equivalent to the European constellation called Ursa Minor or the Little Dipper. In Taoism, the Great Emperor of Curved Array is the eldest son of Doumu and the brother of the Ziwei Emperor.
History
Emperor Gaozong of Tang was given the posthumous name Emperor Tianhuang by Wu Zetian. Liu Yan was also given this posthumous name.
Constellation
There is a constellation named after the Tianhuang Emperor.
See also
North Star
Myōken
Wufang Shangdi
Four heavenly ministers
Notes
References
External links
道教文化资料库 (Taoist culture database)
玉皇大帝 (Jade Emperor)
后土皇地祇-地母元君 (Houtu, the Earth Mother)
Taoist deities
Chinese gods
Four heavenly ministers
Chinese constellations
Stellar deities
Polaris | Tianhuang Emperor | Astronomy | 227 |
33,757,250 | https://en.wikipedia.org/wiki/C22H30O4 | The molecular formula C22H30O4 (molar mass: 358.47 g/mol) may refer to:
Bolinaquinone
Canrenoic acid
Cannabichromenic acid
Gestonorone acetate, or gestronol acetate
Tetrahydrocannabinolic acid
Molecular formulas | C22H30O4 | Physics,Chemistry | 69 |
912,923 | https://en.wikipedia.org/wiki/DiSEqC | DiSEqC (short for Digital Satellite Equipment Control) is a special communication protocol for use between a satellite receiver and a device such as a multi-dish switch or a small dish antenna rotor. DiSEqC was developed by European satellite provider Eutelsat, which now acts as the standards agency for the protocol.
History
Eutelsat apparently developed the system to allow satellite users in Continental Europe to switch between the more popular SES Astra satellites at 19.2° east and Eutelsat's own Hot Bird system at 13° east. As a result, the vast majority of European satellite receivers support DiSEqC 1.0 or higher, with the exception of all set top boxes manufactured under the Sky Digibox name. All supporting receivers have received certification to carry a logo specifying which variation of DiSEqC they support.
Protocol
DiSEqC relies only upon a coaxial cable to transmit both bidirectional data/signals and power. DiSEqC is commonly used to control switches and motors, and is more flexible than 13/18 Volt and 22 kHz tone or ToneBurst/MiniDiSEqC techniques. DiSEqC is also compatible with the actuators used to rotate large C band dishes if used with a DiSEqC positioner. DiSEqC uses a pulsed (tone-burst) 22 kHz sine-wave at 0.65 V (± 0.25 V) peak to peak.
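As a concrete illustration of the message framing, the minimal sketch below sends a DiSEqC 1.0 committed-switch command through the Linux DVB frontend API. This is a hedged example, not reference code from Eutelsat: the device path, port choice, data-nibble layout and delays follow common driver practice (as in utilities such as szap) and are assumptions for illustration.

/* Minimal sketch: select input B of a DiSEqC 1.0 switch via Linux DVB.
 * Device path and parameter values are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dvb/frontend.h>

int main(void)
{
    int fe = open("/dev/dvb/adapter0/frontend0", O_RDWR); /* assumed path */
    if (fe < 0) { perror("open"); return 1; }

    unsigned sat = 1;   /* switch input 0-3 (DiSEqC 1.0 committed ports)  */
    int horizontal = 1; /* 18 V line voltage selects horizontal           */
    int high_band = 0;  /* continuous 22 kHz tone selects the high band   */

    /* Master frame: framing 0xE0 (command, no reply, first transmission),
     * address 0x10 (any LNB/switcher), command 0x38 (write committed port
     * group), data 0xF0 | (sat << 2) | (polarisation << 1) | band.       */
    struct dvb_diseqc_master_cmd cmd = {
        .msg = { 0xE0, 0x10, 0x38,
                 0xF0 | (sat << 2) | (horizontal << 1) | high_band },
        .msg_len = 4,
    };

    ioctl(fe, FE_SET_TONE, SEC_TONE_OFF);  /* tone must be off while keying */
    ioctl(fe, FE_SET_VOLTAGE, horizontal ? SEC_VOLTAGE_18 : SEC_VOLTAGE_13);
    usleep(15000);                         /* settling time between steps  */
    if (ioctl(fe, FE_DISEQC_SEND_MASTER_CMD, &cmd) < 0)
        perror("FE_DISEQC_SEND_MASTER_CMD");
    usleep(15000);
    ioctl(fe, FE_SET_TONE, high_band ? SEC_TONE_ON : SEC_TONE_OFF);

    close(fe);
    return 0;
}

A production tuner application would additionally check each ioctl's return value and repeat the frame when driving cascaded switches.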
The "Di" (digital) part of the name refers to the digital nature of the signals used by the protocol and does not imply anything about the transmission that the dish is used to receive; DiSEqC may be used with both digital and analogue satellite systems.
Versions and compatibility
A number of versions of DiSEqC exist:
DiSEqC 1.0, which allows switching between up to 4 satellite sources
DiSEqC 1.1, which allows switching between up to 16 sources
DiSEqC 1.2, which allows switching between up to 16 sources, and control of a single axis satellite motor
DiSEqC 2.0, which adds bi-directional communications to DiSEqC 1.0
DiSEqC 2.1, which adds bi-directional communications to DiSEqC 1.1
DiSEqC 2.2, which adds bi-directional communications to DiSEqC 1.2
DiSEqC 3.0, which adds remote management of receivers to DiSEqC 2.2 to enable broadcast house uses
The first four variations were standardized by February 1998, prior to general use of digital satellite television. The later versions are backwards compatible with the lower revisions, but the lower revisions are, as might be expected, not forwards compatible with the higher revision numbers. 1.x and 2.x versions are both backwards and forwards compatible.
The terms DiSEqC 1.3 and 2.3 are also often used by manufacturers and retailers to refer to the use of DiSEqC with other protocols. For example, 1.3 usually refers to a receiver which uses USALS in conjunction with the DiSEqC 1.2 protocol. Such terminology has not been authorised by Eutelsat.
The following table shows compatibility between the various DiSEqC versions:
NOTE: a 1.x receiver will not be able to receive communication from a switch or motor. Usually this is not important, as the switch or motor can be controlled by the receiver without problems.
See also
USALS = Universal Satellites Automatic Location System
Monoblock LNB - LNB with built-in DiSEqC switch, used for multiple streams on a single dish
SAT>IP - A modern alternative to DiSEqC which uses an IP-based network to deliver multiple DVB streams
SES
Astra
Eutelsat
Astra 19.2°E
Automatic Tracking Satellite Dish
Starlink Dish
Notes
External links
DiSEqC specifications
DiSEqC specs (retrieved from Internet Archive)
DiSEqC Bus Functional Specification Version 4.2 (bus_spec.pdf contained in DiSEqC-documentation.zip)
Television technology
Satellite television | DiSEqC | Technology | 848 |
30,814,497 | https://en.wikipedia.org/wiki/Confession%3A%20A%20Roman%20Catholic%20App | Confession: A Roman Catholic App was an application (or "app") for the iPhone that was intended to guide members of the Catholic Church through the Sacrament of Penance, also known as confession or reconciliation. According to the developers, the app did not replace confession in person before a priest, but was intended to help Catholics determine what sins they may have committed, as well as guide them through the appropriate prayers in the sacrament. The app is no longer available for download.
The app was published by Little i Apps, and received a nihil obstat from Michael Heintz, PhD, and an imprimatur from Kevin C. Rhoades, the bishop of Fort Wayne-South Bend. The app was developed by alumni of Franciscan University of Steubenville.
References
Catholic liturgy
Christian software | Confession: A Roman Catholic App | Technology | 156 |
12,269,011 | https://en.wikipedia.org/wiki/Polyphenyl%20ether | Phenyl ether polymers are a class of polymers that contain a phenoxy or a thiophenoxy group as the repeating group in ether linkages. Commercial phenyl ether polymers belong to two chemical classes: polyphenyl ethers (PPEs) and polyphenylene oxides (PPOs). The phenoxy groups in the former class of polymers do not contain any substituents, whereas those in the latter class contain 2 to 4 alkyl groups on the phenyl ring. The structure of an oxygen-containing PPE is provided in Figure 1 and that of a 2,6-xylenol-derived PPO is shown in Figure 2. Either class can have the oxygen atoms attached at various positions around the rings.
Structure and synthesis
The proper name for a phenyl ether polymer is poly(phenyl ether) or polyphenyl polyether, but the name polyphenyl ether is widely accepted. Polyphenyl ethers (PPEs) are obtained by repeated application of the Ullmann Ether Synthesis: reaction of an alkali-metal phenate with a halogenated benzene catalyzed by copper.
PPEs of up to 6 phenyl rings, both oxy and thio ethers, are commercially available. See Table 1. They are characterized by indicating the substitution pattern of each ring, followed by the number of phenyl rings and the number of ether linkages. Thus, the structure in Figure 1 with n equal to 1 is identified as pmp5P4E, indicating para, meta, para substitution of the three middle rings, a total of 5 rings, and 4 ether linkages. Meta substitution of the aryl rings in these materials is most common and often desired. Longer chain analogues with up to 10 benzene rings are also known.
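To make the naming rule concrete, the hypothetical helper below simply encodes the convention just described (substitution pattern of the interior rings, total ring count, then ether-linkage count); it is an illustrative sketch, not an established cheminformatics routine.

/* Hypothetical sketch of the PPE designation rule described above. */
#include <stdio.h>
#include <string.h>

/* Builds a designation such as "pmp5P4E": the pattern lists only the
 * interior rings, so total rings = interior + 2 terminal rings, and a
 * chain of R rings joined by ethers has R - 1 ether linkages. */
static void ppe_designation(const char *interior_pattern, char *out, size_t n)
{
    size_t rings = strlen(interior_pattern) + 2;
    snprintf(out, n, "%s%zuP%zuE", interior_pattern, rings, rings - 1);
}

int main(void)
{
    char name[32];
    ppe_designation("pmp", name, sizeof name);
    printf("%s\n", name); /* prints pmp5P4E, matching Figure 1 with n = 1 */
    return 0;
}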
The simplest member of the phenyl ether family is diphenyl ether (DPE), also called diphenyl oxide, the structure of which is provided in Figure 4. Low molecular weight polyphenyl ethers and thioethers are used in a variety of applications, and include high-vacuum devices, optics, electronics, and in high-temperature and radiation-resistant fluids and greases. Figure 5 shows the structure of the sulfur analogue of 3-R polyphenyl ether shown in Figure 3.
Physical properties
Typical physical properties of polyphenyl ethers are provided in Table 2. Physical properties of a particular PPE depend upon the number of aromatic rings, their substitution pattern, and whether it is an ether or a thioether. In the case of products of mixed structures, properties are hard to predict from only the structural features; hence, they must be determined via measurement.
The important attributes of PPEs include their thermal and oxidative stability and stability in the presence of ionizing radiation. PPEs have the disadvantage of having somewhat high pour points. For example, PPEs that contain two and three benzene rings are actually solids at room temperatures. The melting points of the ordinarily solid PPEs are lowered if they contain more m-phenylene rings, alkyl groups, or are mixtures of isomers. PPEs that contain only o- and p-substituted rings have the highest melting points.
Thermo-oxidative stability
PPEs have excellent high temperature properties and good oxidation stability. With respect to volatility, the p-derivatives are the least volatile and the o-derivatives the most; the opposite ordering holds for flash points and fire points. Spontaneous ignition temperatures of polyphenyl ethers lie between ; alkyl substitution reduces this value by ~. PPEs are compatible with most metals and elastomers that are commonly used in high-temperature applications. They typically swell common seal materials.
Oxidation stability of un-substituted PPEs is quite good, partly because they lack easily oxidizable carbon-hydrogen bonds. Thermal decomposition temperature, as measured by the isoteniscope procedure, is between .
Radiation stability
Ionizing radiation affects all organic compounds, causing a change in their properties because radiation disrupts the covalent bonds that are most prevalent in organic compounds. One result of ionization is that the organic molecules disproportionate to form smaller hydrocarbon molecules as well as larger hydrocarbon molecules. This is reflected by increased evaporation loss, lowering of the flash and fire points, and increased viscosity. Other chemical reactions caused by radiation include oxidation and isomerization. The former leads to increased acidity, corrosivity, and coke formation; the latter causes a change in viscosity and volatility.
PPEs have extremely high radiation resistance. Of all classes of synthetic lubricants (with the possible exception of perfluoropolyethers) the polyphenyl ethers are the most radiation resistant. Excellent radiation stability of PPEs can be ascribed to the limited number of ionizable carbon-carbon and carbon-hydrogen bonds. In one study, the performance of PPE under the influence of 1 ergs/gram of radiation at was compared with synthetic ester, synthetic hydrocarbon, and silicone fluids. PPE showed a viscosity increase of only 35%, while all other fluids showed a viscosity increase of 1700% and gelled. Further tests have shown PPEs to be resistant to gamma and associated neutron radiation dosages of 1 erg/g at temperatures up to .
Surface tension
PPEs have high surface tension; hence these fluids have a lower tendency to wet metal surfaces. The surface tension of the commercially available 5R4E is 49.9 dynes/cm, one of the highest in pure organic liquids. This property is useful in applications where migration of the lubricant into the surrounding environment must be avoided.
Applications
While originally PPEs were developed for use in extreme environments that were experienced in aerospace applications, they are now used in other applications requiring low volatility and excellent thermo-oxidative and ionizing radiation stability. Such applications include use as diffusion pump fluids; high vacuum fluids; and in formulating jet engine lubricants, high-temperature hydraulic lubricants and greases, and heat transfer fluids. In addition, because of excellent optical properties these fluids have found use in optical devices.
Ultra-high-vacuum fluids
Vacuum pumps are devices that remove gases from an enclosed space to greatly reduce pressure. Oil diffusion pumps in combination with a fore pump are amongst the most popular. Diffusion pumps use a high boiling liquid of low vapor pressure to create a high-speed jet that strikes the gaseous molecules in the system to be evacuated and direct them into space that is being evacuated by the fore pump. A good diffusion fluid must therefore reflect low vapor pressure, high flash point, high thermal and oxidative stability and chemical resistance. If the diffusion pump is operating in the proximity of ionizing radiation source, good radiation stability is also desired.
Data presented in Table 3 demonstrates polyphenyl ether to be superior to other fluids that are commonly used in diffusion pumps. PPEs help achieve the highest vacuum of 4 torr at 25 °C. Such high vacuums are necessary in equipment such as electron microscopes, mass spectrometers and that used for various surface physics studies. Vacuum pumps are also used in the production of electric lamps, vacuum tubes, and cathode ray tubes (CRTs), semiconductor processing, and vacuum engineering.
Electronic connector lubricants
5R4E PPE has a surface tension of 49.9 dynes/cm, which is amongst the highest in pure organic liquids. Because of this, this PPE and the other PPEs do not effectively wet metal surfaces. This property is useful when migration of a lubricant from one part of the equipment to another part must be avoided, such as in certain electronic devices. A thin film of polyphenyl ether on a surface is not a thin contiguous film as one would envision, but rather comprises tiny droplets. This PPE property tends to keep the film stationary, or at least to cause it to remain in the area where the lubrication is needed, rather than migrating away by spreading and forming a new surface. As a result, contamination of other components and equipment, which do not require a lubricant, is avoided. The high surface tension of PPEs, therefore, makes them useful in lubricating electronic contacts.
Polyphenyl ether lubricants have a 30-year history of commercial service for connectors with precious and base metal contacts in telecom, automotive, aerospace, instrumentation and general-purpose applications. In addition to maintaining the current flow and providing long-term lubrication, PPEs offer protection to connectors against aggressive acidic and oxidative environments. By providing a protective surface film, polyphenyl ethers not only protect connectors against corrosion but also against vibration-related wear and abrasion that leads to fretting wear. The devices that benefit from the specialized properties of PPEs include cell phones, printers, and a variety of other electronic appliances. The protection lasts for decades or for the life of the equipment.
Optics
Polyphenyl ethers (PPEs) possess good optical clarity, a high refractive index, and other beneficial optical properties. Because of these, PPEs have the ability to meet the rigorous performance demands of signal processing in advanced photonics systems. Optical clarity of PPEs resembles that of the other optical polymers, that is, they have refractive indices of between 1.5 and 1.7 and provide good propagation of light between approximately 400 nm and 1700 nm. Close refractive index (RI) matching between materials is important for proper propagation of light through them. Because of the ease of RI matching, PPEs are used in many optical devices as optical fluids. Extreme resistance to ionizing radiation gives PPEs an added advantage in the manufacture of solar cells and solid-state UV/blue emitters and telecommunication equipment made from high-index glasses and semiconductors.
High-temperature and radiation-resistant lubricants
PPEs, being of excellent thermo-oxidative stability and radiation resistance, have found extensive use in high temperature applications that also require radiation resistance. In addition, PPEs demonstrate better wear control and load-carrying ability than mineral oils, especially when used in bearings.
PPEs were developed for use in jet engines that involved high speed-related frictional temperatures of as high as . While the use of PPEs in lubricating jet engines has somewhat subsided due to their higher cost, they are still used in some aerospace applications. PPEs are also used as base fluids for radiation-resistant greases used in nuclear power plant mechanisms. PPEs and their derivatives have also found use as vapor phase lubricants in gas turbines and custom bearings, and wherever extreme environmental conditions exist. Vapor phase lubrication is achieved by heating the liquid lubricant above its boiling point. The resultant vapors are then transported to the hot bearing surface. If the temperatures of the bearing surface are kept below the lubricant’s boiling point, the vapors re-condense to provide liquid lubrication.
Polyphenyl ether technology can also provide superior fire safety and fatigue life, depending on the specific bearing design. In this application, PPEs have the advantage of providing lubrication both as a liquid at low temperatures and as a vapor at temperatures above . Due to the low volatility and excellent high-temperature thermo-oxidative stability, PPEs have also found use as a lubricant for chains used in and around kilns, metal fabrication plants, and glass molding and manufacturing equipment. In these high-temperature applications, PPEs do not form any sludge and hard deposits. The low soft-carbon residue that is left behind is removed easily by wiping. PPEs' low volatility, low flammability, and good thermodynamic properties make them ideally suited for use as heat transfer fluids and in heat sink applications as well.
Polyphenylene oxides (PPOs)
These polymers are made through oxidative coupling of a substituted phenol in the presence of oxygen and copper- and amine-containing catalysts, such as cuprous bromide and pyridine. See Figure 2 for the PPO structure. PPO polymers can be classified as plastic resins. They and their composites with polystyrene, glass, and nylon are used as high-strength, moisture-resistant engineering plastics in a number of industries, including computer, telecommunication, and automotive parts. PPOs are marketed by SABIC Innovative Plastics under the trademarked name Noryl.
References
Plastics
Polyethers
Lubricants | Polyphenyl ether | Physics | 2,632 |
77,621,141 | https://en.wikipedia.org/wiki/BD%2B29%205007 | BD+29 5007 is a K-type star located 77 light-years away in the constellation Pegasus. It has a large-separation companion that was identified in 2016. The pair was identified as a possible member of the million years old Argus association (see IC 2391), though this is disputed.
Properties
The star has a mass of , a radius of , and a temperature of Kelvin. It has a spectral type of K5V.
Companion
The companion is 2MASS J23512200+3010540 (short: 2MASS J2351+3010), which was discovered in 2010 and first identified as a possibly young low-mass object in 2014 by the BANYAN II survey. The authors found an L5.5 dwarf with red near-infrared colors. If it is a member of Argus, it should have a mass of 9−11 , according to the authors. However, the BANYAN VII survey in 2015 revised the status of 2MASS J2351+3010 to a field object, i.e. not a member of any stellar cluster or association. This is also suggested by the measured surface gravity of 2MASS J2351+3010, which is consistent with that of a field object. This would mean that the companion is too massive to have a planetary mass (i.e. its mass is larger than ).
In 2016 it was identified as a possible companion to BD+29 5007. In 2024 it was again identified as an Argus member with a mass of . The same authors calculate a 1.71% probability that this system is a false-positive match. The companion is separated by 935 arcseconds, which translates into 22,100 astronomical units at this distance. This wide separation is larger than the 12,000 AU projected separation of Gliese 900 b, currently the planetary-mass object with the longest known orbit, and is similar to that of brown dwarfs such as UCAC4 328-061594.
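The quoted projected separation follows from the small-angle relation separation [AU] ≈ angle [arcsec] × distance [pc]; a quick illustrative check (constants rounded):

```python
# Small-angle check of the quoted 22,100 AU projected separation.
LY_PER_PARSEC = 3.2616          # light-years per parsec (rounded)

distance_ly = 77.0              # distance quoted in the article
separation_arcsec = 935.0       # angular separation quoted in the article

distance_pc = distance_ly / LY_PER_PARSEC
separation_au = separation_arcsec * distance_pc
print(f"{separation_au:,.0f} AU")  # ~22,000 AU, consistent with 22,100
```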
See also
List of exoplanet extremes
References
K-type main-sequence stars
L-type brown dwarfs
Binary stars
Pegasus (constellation)
Durchmusterung objects
117559 | BD+29 5007 | Astronomy | 448 |
1,578,810 | https://en.wikipedia.org/wiki/Photogram | A photogram is a photographic image made without a camera by placing objects directly onto the surface of a light-sensitive material such as photographic paper and then exposing it to light.
The usual result is a negative shadow image showing variations in tone that depend upon the transparency of the objects used. Areas of the paper that have received no light appear white; those exposed for a shorter time or through transparent or semi-transparent objects appear grey, while fully exposed areas are black in the final print.
The technique is sometimes called cameraless photography. It was used by Man Ray in his rayographs. Other artists who have experimented with the technique include László Moholy-Nagy, Christian Schad (who called them "Schadographs"), Imogen Cunningham and Pablo Picasso.
Variations of the technique have also been used for scientific purposes, in shadowgraph studies of flow in transparent media and in high-speed Schlieren photography, and in the medical X-ray.
The term photogram comes from the combining form photo- of Ancient Greek φῶς (phōs, "light"), and the Ancient Greek suffix -gram, from γράμμα (gramma, "written character, letter, that which is drawn"), from γράφω (graphō, "to scratch, to scrape, to graze").
History
Prehistory
The phenomenon of the shadow has long aroused human curiosity and inspired artistic representation, as recorded by Pliny the Elder, and various forms of shadow play since the 1st millennium BCE. The photogram, in essence, is a means by which the fall of light and shade on a surface may be automatically captured and preserved. To do so required a substance that would react to light. From the 17th century, photochemical reactions were progressively observed or discovered in salts of silver, iron, uranium and chromium. In 1725, Johann Heinrich Schulze was the first to demonstrate a temporary photographic effect in silver salts, confirmed by Carl Wilhelm Scheele in 1777, who found that violet light caused the greatest reaction in silver chloride. Humphry Davy and Thomas Wedgwood reported that they had produced temporary images from placing stencils/light sources on photo-sensitized materials, but had no means of fixing (making permanent) the images.
Nineteenth century
The first photographic negatives made were photograms (though the first permanent photograph was made with a camera by Nicéphore Niépce). William Henry Fox Talbot called these photogenic drawings, which he made by placing leaves or pieces of lace onto sensitized paper, then left them outdoors on a sunny day to expose. This produced a dark background with a white silhouette of the placed object.
In 1843, Anna Atkins produced a book titled British Algae: Cyanotype Impressions in installments; the first to be illustrated with photographs. The images were all photograms of botanical specimens, mostly seaweeds, which she made using Sir John Herschel's cyanotype process, which yields blue images.
Modernism
Photograms, and the artists who worked in the medium, have participated in and contributed to several modern art movements, such as Dada and Constructivism, and in architecture in the formalist dissections of the Bauhaus.
The relative ease of access (not needing a camera and, depending on the medium, a darkroom), and perhaps the interactive, almost incidental nature of creating photograms, enabled experiments in abstraction by Christian Schad as early as 1918, Man Ray in 1921, and Moholy-Nagy in 1922, through dematerialisation and distortion, merging and interpenetration of forms, and flattening of perspective.
Christian Schad's 'schadographs'
In 1918, Christian Schad's experiments with the photogram were inspired by Dada, creating photograms from random arrangements of discarded objects he had collected such as torn tickets, receipts and rags. Some argue that he was the first to make this an art form, preceding Man Ray and László Moholy-Nagy by at least a year or two, and one was published in March 1920 in the magazine Dadaphone by Tristan Tzara, who dubbed them 'Schadographs'.
Man Ray's 'rayographs'
Photograms were used in the 20th century by a number of photographers, particularly Man Ray, whose "rayographs" were also given their name by Dada leader Tzara. Ray described his (re-)discovery of the process in his 1963 autobiography.
In his photograms, Man Ray made combinations of objects—a comb, a spiral of cut paper, an architect's French curve—some recognisable, others transformed, typifying Dada's rejection of 'style', emphasising chance and abstraction. He published a selection of these rayographs as Champs délicieux in December 1922, with an introduction by Tzara. His 1923 film Le Retour à la Raison ('Return to Reason') adapts rayograph technique to moving images.
Other 20th century artists
In the 1930s, artists including Theodore Roszak and Piet Zwart also made photograms. Luigi Veronesi combined the photographic image with oil on canvas in large-scale colour images by preparing a light-sensitive canvas on which he placed objects in the dark for exposure and then fixing. The shapes became the matrix for an abstract painting to which he applied colour and added drawn geometric lines to enhance the dynamics, exhibiting them at the Galerie L'Equipe in Paris in 1938–1939. Bronislaw Schlabs, Julien Coulommier, Andrzej Pawlowski and Beksiński were photogram artists in the 1940s and 1950s; Heinz Hajek-Halke and Kurt Wendlandt with their light graphics in the 1960s; Lina Kolarova, Rene Mächler, Dennis Oppenheim, and Andreas Mulas in the 1970s; and Tomy Ceballos, Kare Magnole, Andreas Müller-Pohle, and Floris M. Neusüss in the 1980s.
Contemporary
Established contemporary artists who are widely known for using photograms are Adam Fuss, Susan Derges, Christian Marclay, and Karen Amy Finkel Fishof, who has digitized and minted her photograms as NFTs. Younger artists worldwide continue to value the materiality of the technique in the digital age. Mauritian artist Audrey Albert uses cameraless techniques to connect material culture to contemporary identities of Chagos Islanders.
Procedure
The customary approach to making a photogram is to use a darkroom and enlarger and to proceed as one would in making a conventional print, but instead of using a negative, to arrange objects on top of a piece of photographic paper for exposure under the enlarger lamp, which can be controlled with the timer switch and aperture controls. Since the enlarger emits light through a lens aperture, the shadows of even tall objects, like a beaker standing upright on the paper, will stay sharp; the more so at smaller apertures.
The print is then processed, washed, and dried.
At this stage the image will look similar to a negative, in which shadows are white. A contact-print onto a fresh sheet of photographic paper will reverse the tones if a more naturalistic result is desired, which may be facilitated by making the initial print on film.
However, there are other arrangements for making photograms, and devising them is part of the creative process. Alice Lex-Nerlinger used the conventional darkroom approach in making photograms as a variation on her airbrushed stencil paintings, since light penetrating the translucent paper from which she cut her pictures would print a variegated texture she could not otherwise obtain.
Another component of this medium is the light source, or sources, used. A broad source of light will cast nuances of shadow: umbra, penumbra and antumbra.
Photograms may be made outdoors provided the photographic emulsion is sufficiently slow to permit it. Direct sunlight is a point source of light (like that of an enlarger), while cloudy conditions give soft-edged shadows around three-dimensional objects placed on the photosensitive surface. The cyanotype process ('blueprints'), such as that used by Anna Atkins (see above), is slow and insensitive enough that fixing an impression on paper, fabric, timber or other supports can be done in subdued light indoors. Exposure outdoors may take many minutes depending on conditions, and its progress may be gauged by inspection as the coating darkens. 'Printing-out paper' or other daylight-printing material such as gum bichromate may also enable outdoor exposure. Christian Schad simply placed tram tickets and other ephemera under glass on printing-out paper on his window-sill for exposure.
Conventional monochrome or colour, or direct-positive photographic material may be exposed in the dark using a flash unit, as does Adam Fuss for his photograms that capture the movement of a crawling baby, or an eel in shallow water. Susan Derges captures water currents in the same way, while Harry Nankin has immersed large sheets of monochrome photographic paper at the edge of the sea and mounted a flash on a specially-constructed oversize tripod above it to capture the action of waves and seaweeds washing over the paper surface. In 1986, Floris Neusüss began his Nachtbilder ('nocturnal pictures'), exposed by lightning.
Other variations include using the light of a television screen or computer display, pressing the photosensitive paper to the surface. Using multiple light sources, exposing with multiple flashes of light, moving the light source during exposure, projecting shadows from a low-angle light, or making successive exposures while moving, removing or adding shadows will all produce multiple shadows of varying quality.
List of notable photographers using photograms
Markus Amm
Anna Atkins
Walead Beshty
Christopher Bucklow
Kate Cordsen
Olive Cotton
Susan Derges
Michael Flomen
Adam Fuss
Heinz Hajek-Halke
Raoul Hausmann
John Herschel
Edmund Kesting
Len Lye
László Moholy-Nagy
Alice Lex-Nerlinger
Floris Michael Neusüss
Anne Noble
Andrzej Pawlowski
Pablo Picasso
Man Ray
Alexander Rodchenko
Theodore Roszak
Christian Schad
Greg Stimac
August Strindberg
Jean-Pierre Sudre
Kunié Sugiura
Henry Fox Talbot
Mikhail Tarkhanov
Elsa Thiemann
Luigi Veronesi
Kurt Wendlandt
Nancy Wilson-Pajic
Keith Carter
See also
Luminogram – photogram using light only with no objects
Schlieren photography – light is focused with a lens or mirror and a knife edge is placed at the focal point to create graduated shadows of flow and waves in otherwise transparent media like air, water, or glass
Shadowgraph – like Schlieren photography, but without the knife-edge, reveals non-uniformities in transparent media
Chemigram – camera-less technique using photographic (and other) chemistry with light
Neues Sehen – László Moholy-Nagy's 'New Vision' photography movement
Cliché verre – semiphotographic printmaking technique using a negative created by drawing
Drawn-on-film animation – cliche-verre technique in which movie film emulsion is scratched and drawn frame-by-frame
Cyanotype – photographic printing process that produces a cyan-blue print
Kirlian photography – photographic techniques used to capture the phenomenon of electrical coronal discharges
References
Photographic techniques
History of photography
Light
Shadows | Photogram | Physics | 2,406 |
56,464,414 | https://en.wikipedia.org/wiki/Bornyl%20acetate | Bornyl acetate is a chemical compound. Its molecular formula is C12H20O2 and its molecular weight is 196.29 g/mol. It is the acetate ester of borneol. It is used as a food additive, flavouring agent, and odour agent.
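As a quick illustrative check, the quoted molecular weight follows directly from standard atomic weights:

```python
# Recompute the molecular weight of bornyl acetate (C12H20O2)
# from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 12, "H": 20, "O": 2}

mw = sum(ATOMIC_WEIGHT[el] * count for el, count in FORMULA.items())
print(f"{mw:.2f} g/mol")  # 196.29 g/mol, matching the quoted value
```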
It is a component of the essential oil from pine needles (family Pinaceae) and is primarily responsible for its odor.
References
Acetate esters
Terpenes and terpenoids
Food additives | Bornyl acetate | Chemistry | 103 |
407,642 | https://en.wikipedia.org/wiki/Pongidae | Pongidae, or the pongids, is an obsolete primate taxon containing chimpanzees, gorillas and orangutans. By this definition pongids were also called "great apes". This taxon is not used today but is of historical significance. The great apes are currently classified as Hominidae. This entry addresses the old usage of pongid.
The words "Pongidae" and "pongids" are sometimes used informally for the primate taxon containing orangutans and their extinct fossil relations. For this usage the currently most widely accepted name is Ponginae (or informally Asian hominids or pongines), the orangutan subfamily of the Hominidae or hominids. In current hominid taxonomy there is no “pongid” taxon. The orangutan taxon is now known to be paraphyletic to other (African) hominids. The orangutans are the only surviving species of the subfamily Ponginae, which genetically diverged from the other hominids (gorillas, chimpanzees and humans) between 19.3 and 15.7 million years ago. The subfamilies split somewhat later. The corresponding crown group for this taxon is Hominidae.
Distinction of great apes (formerly pongids) to hominins
Skull
The great ape (formerly pongid) skull contains the following features that are absent or less pronounced in humans:
a sulcus behind the brow ridges
prognathism
a protruding occipital region
large, bony eye sockets
a large nasal opening
constriction just behind the orbital region
stout facial bones
a diastema
a simian shelf
a larger, well pronounced brow ridge
Adaptations for locomotion
The following great ape (formerly Pongid) adaptations are for arboreal and knuckle walking locomotion and are not found in humans:
Similarity to hominins
The australopithecines show intermediate character states between great apes (formerly pongids) and humans, with Homo erectus (formerly Pithecanthropus) intermediate between australopithecines and humans. Members of the genus Homo share many key features with anatomically modern man.
See also
Anoiapithecus
Chororapithecus
History of hominoid taxonomy
Pierolapithecus
Samburupithecus
References
Science and faith: The hominid fossil record
External links
Pongidae - the Great Apes Family
Brain endocast asymmetry in pongids
Apes
Primate families
Obsolete primate taxa
Paraphyletic groups | Pongidae | Biology | 532 |
63,623,321 | https://en.wikipedia.org/wiki/Unit%20%28Norway%29 | Unit, which labels itself as the Norwegian "directorate for ICT and joint services in higher education and research", is the directorate within the Ministry of Education and Research that provides governance of and access to shared information and communications technology (ICT) services. Unit was created on January 1, 2018, following a merger of BIBSYS and parts of Uninett.
See also
National Library of Norway
Open access in Norway
Project DEAL
References
Further reading
External links
2018 establishments in Norway
Government agencies established in 2018
Organisations based in Trondheim
Government agencies of Norway
Information and communications technology | Unit (Norway) | Technology | 112 |
13,908,785 | https://en.wikipedia.org/wiki/Law%20of%20continuity | The law of continuity is a heuristic principle introduced by Gottfried Leibniz based on earlier work by Nicholas of Cusa and Johannes Kepler. It is the principle that "whatever succeeds for the finite, also succeeds for the infinite". Kepler used the law of continuity to calculate the area of the circle by representing it as an infinite-sided polygon with infinitesimal sides, and adding the areas of infinitely many triangles with infinitesimal bases. Leibniz used the principle to extend concepts such as arithmetic operations from ordinary numbers to infinitesimals, laying the groundwork for infinitesimal calculus. The transfer principle provides a mathematical implementation of the law of continuity in the context of the hyperreal numbers.
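Kepler's polygon argument can be illustrated numerically: an inscribed regular n-gon decomposes into n triangles of total area (1/2)·n·r²·sin(2π/n), which tends to πr² as n grows. A short illustrative sketch:

```python
# Area of a regular n-gon inscribed in a circle of radius r, viewed as
# n thin triangles with apex at the centre (Kepler's construction).
import math

def polygon_area(r: float, n: int) -> float:
    return 0.5 * n * r * r * math.sin(2 * math.pi / n)

r = 1.0
for n in (6, 60, 600, 6000):
    print(f"n = {n:5d}: area = {polygon_area(r, n):.6f}")
print(f"pi * r^2 = {math.pi * r * r:.6f}")  # the limiting value
```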
A related law of continuity concerning intersection numbers in geometry was promoted by Jean-Victor Poncelet in his "Traité des propriétés projectives des figures".
Leibniz's formulation
Leibniz expressed the law in the following terms in 1701:
In any supposed continuous transition, ending in any terminus, it is permissible to institute a general reasoning, in which the final terminus may also be included (Cum Prodiisset).
In a 1702 letter to French mathematician Pierre Varignon subtitled “Justification of the Infinitesimal Calculus by that of Ordinary Algebra," Leibniz adequately summed up the true meaning of his law, stating that "the rules of the finite are found to succeed in the infinite."
The law of continuity became important to Leibniz's justification and conceptualization of the infinitesimal calculus.
See also
Transcendental law of homogeneity
References
Nonstandard analysis
Gottfried Wilhelm Leibniz
Infinity
History of calculus
Mathematics of infinitesimals
Metaphysical principles | Law of continuity | Mathematics | 350 |
15,369,988 | https://en.wikipedia.org/wiki/Axiomatic%20product%20development%20lifecycle | Axiomatic product development lifecycle (APDL), also known as transdisciplinary system development lifecycle (TSDL) and transdisciplinary product development lifecycle (TPDL), is a systems engineering product development model proposed by Bulent Gumus that extends the Axiomatic design (AD) method. APDL covers the whole product lifecycle, including early factors that affect the entire cycle, such as development testing, input constraints and system components.
APDL provides an iterative and incremental way for a team of transdisciplinary members to approach holistic product development. A practical outcome includes capturing and managing product design knowledge. The APDL model addresses weaknesses observed in previous development models regarding design quality, requirements management, change management, project management, and communication between stakeholders. Practicing APDL may reduce development time and project cost.
Overview
APDL adds the Test domain and four new characteristics to Axiomatic design (AD): Input Constraints in the Functional Domain; System Components in the Physical Domain; Process Variables tied to System Components instead of Design Parameters; and Customer Needs mapped to Functional Requirements and Input Constraints.
APDL proposes a V-shaped process to develop the Design Parameters and System Components (detailed design). The process starts top-down with Process Variables (PV) and Component Test Cases (CTC) to complete the PV, CTC, and Functional Test Cases (FTC); after the build, the product is tested with a bottom-up approach.
APDL Domains
Customer domain
Customer Needs (CN) are elements that the customer seeks in a product or system.
Functional domain
Functional Requirements (FR) completely characterize the minimum performance to be met by the design solution, product etc. FR are documented in requirement specifications (RS).
Input Constraints (IC) are included in the functional domain along with the FR. IC are specific to overall design goals and are imposed externally by CN, product users or conditions of use, such as regulations. IC are derived from CN and then revised based on other constraints that the product has to comply with but not mentioned in the Customer Domain.
Physical domain
The Design Parameters (DP) are the elements of the design solution in the physical domain that are chosen to satisfy the specified FRs. DPs can be conceptual design solutions, subsystems, components, or component attributes.
System Components (SC) provide a categorical design solution in the DP, where the categories represent physical parts in the Physical Domain. The SC hierarchy represents the physical system architecture or product tree. The method for categorizing varies: Eppinger portrays the general categories as system, subsystem, and component (Eppinger, 2001), while NASA uses system, segment, element, subsystem, assembly, subassembly, and part (NASA, 1995).
SC make it possible to perform Design Structure Matrices (DSM), change management, component-based cost management and impact analysis, and provide a framework for capturing structural information and requirement traceability.
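A minimal, hypothetical sketch of such a traceability structure (identifiers and field names are illustrative inventions, not part of the published APDL model):

```python
# Hypothetical data model for APDL-style traceability: each System
# Component records the FRs/ICs allocated to it and the PVs/CTCs tied
# to it. All identifiers below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SystemComponent:
    name: str
    allocated_frs: list = field(default_factory=list)   # Functional Requirements
    allocated_ics: list = field(default_factory=list)   # Input Constraints
    process_variables: list = field(default_factory=list)
    component_test_cases: list = field(default_factory=list)
    children: list = field(default_factory=list)         # sub-components

pump = SystemComponent("pump-subsystem",
                       allocated_frs=["FR-1.2"], allocated_ics=["IC-3"],
                       process_variables=["PV-1.2"],
                       component_test_cases=["CTC-1.2"])
system = SystemComponent("system", children=[pump])
print(system.children[0].allocated_frs)  # ['FR-1.2']
```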
Process domain
Process Variables (PV) identify and describe the controls and processes to produce SC.
Test domain
A functional test consists of a set of Functional Test Cases (FTC). FTC are system tests used to verify that FR are satisfied by the system. Black-box testing is the software analog to FTC. At the end of the system development, a functional test verifies that the requirements of the system are met.
Component Test Cases (CTC) are a physical analog to white-box testing. CTC verify that components satisfy the allocated FRs and ICs. Each system component is tested before it is integrated into the system to make sure that the requirements and constraints allocated to that component are all satisfied.
See also
Systems development life-cycle
New product development
Product lifecycle management
Engineering design process
Design–build
Integrated project delivery
References
Further reading
B. Gumus, A. Ertas, D. Tate and I. Cicek, "Transdisciplinary Product Development Lifecycle", Journal of Engineering Design, 19(03), pp. 185–200, June 2008.
B. Gumus, A. Ertas, and D. Tate, "Transdisciplinary Product Development Lifecycle Framework and Its Application to an Avionics System", Integrated Design and Process Technology Conference, June 2006.
B. Gumus and A. Ertas, "Requirements Management and Axiomatic Design", Journal of Integrated Design and Process Science, Vol. 8, No. 4, pp. 19–31, Dec 2004.
Suh, Complexity: Theory and Applications, Oxford University Press, 2005.
Suh, Axiomatic Design: Advances and Applications, Oxford University Press, 2001.
Engineering concepts
Product development
Quality management
Systems engineering | Axiomatic product development lifecycle | Engineering | 969 |
55,959,933 | https://en.wikipedia.org/wiki/Bladelet%20%28impeller%29 | Used in centrifugal impeller terminology, bladelets are the more 'Metro' version of the common engineering description of splitters (shorter blades that do not extend into the centre of the impeller). The term is thought to have originated among the middle-upper-level management at medical device engineering companies, near the turn of the millennium.
Marine propulsion | Bladelet (impeller) | Engineering | 75 |
73,410,057 | https://en.wikipedia.org/wiki/Palladium%20hexafluoride | Palladium hexafluoride is an inorganic chemical compound of palladium and fluorine with the chemical formula . It is reported to be a still-hypothetical compound. It is one of many palladium fluorides.
Synthesis
Fluorination of palladium powder with atomic fluorine at 900–1700 Pa.
Physical properties
Palladium hexafluoride is predicted to be stable. The compound is reported to form a dark red solid that decomposes to . Palladium hexafluoride is a very powerful oxidizing agent.
References
Palladium compounds
Fluorides
Hexafluorides
Metal halides
Oxidizing agents
Hypothetical chemical compounds
Theoretical chemistry | Palladium hexafluoride | Chemistry | 145 |
3,530,603 | https://en.wikipedia.org/wiki/Lethality | Lethality (also called deadliness or perniciousness) is how capable something is of causing death. Most often it is used when referring to diseases, chemical weapons, biological weapons, or their toxic chemical components. The use of this term denotes the ability of these weapons to kill, but also the possibility that they may not kill. Reasons for the lethality of a weapon to be inconsistent, or expressed by percentage, can be as varied as minimized exposure to the weapon, previous exposure to the weapon minimizing susceptibility, degradation of the weapon over time and/or distance, and incorrect deployment of a multi-component weapon.
This term can also refer to the after-effects of weapon use, such as nuclear fallout, which has its highest lethality nearest the deployment site, and whose effects are in proportion to the subject's size and nature (e.g. a child or small animal).
Lethality can also refer to the after-effects of a major chemical or oil/gas process loss of containment, causing fire, explosion, or a toxic cloud. Lethality curves can be developed in process safety to assess and describe mortality patterns around the accident location. The impact is typically greatest closest to the event site and lessens to the outskirts of the impact zone. Blast overpressure, thermal radiation, toxicity and location affect the degree of lethality.
Lethality is also a term used by microbiologists and food scientists as a measure of the ability of a process to destroy bacteria. Lethality may be determined by enumeration of survivors after incremental exposures.
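Under the common assumption of log-linear (first-order) inactivation, such survivor counts yield a D-value, the exposure time per tenfold reduction; a hypothetical illustrative sketch:

```python
# D-value from survivor counts, assuming log-linear inactivation.
import math

def d_value(t0: float, n0: float, t1: float, n1: float) -> float:
    """Exposure time per 1-log (tenfold) reduction in survivors."""
    return (t1 - t0) / (math.log10(n0) - math.log10(n1))

# Hypothetical counts from incremental exposures (CFU/mL):
print(d_value(0.0, 1e6, 10.0, 1e4))  # 5.0 minutes per log reduction
```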
See also
Lethal dose, an indication of the lethal toxicity of a given substance or type of radiation
Stopping power
Toxicity
References
Military doctrines
Military terminology
Epidemiology
Rates
Death | Lethality | Environmental_science | 354 |
28,132,052 | https://en.wikipedia.org/wiki/Potamal | Potamal is a technical term in limnology and hydrology for the lower stretches of a stream or river. It describes the overall habitat, stability and ecology of the biomass.
Further reading
Filion, L., Pienitz, R., "Physical Geography: Natural environments", General Documents, Winter 2007, Université Laval, pp. 30–31.
Geography terminology
Hydrology | Potamal | Chemistry,Engineering,Environmental_science | 76 |
7,407,388 | https://en.wikipedia.org/wiki/Fatuha%20train%20crash | The Fatuha train crash was a rail transport accident that occurred on 4 April 1998, in India. Removal of fishplates led to the packed Howrah-Danapur Express jumping the tracks, killing at least 11 passengers and injuring more than 50 near Fatuha Station in Fatuha city on the Eastern Railway's Danapur division. In all, nine bogies derailed, disrupting traffic.
Local citizens assisted the injured at the scene until authorities arrived. The injured were rushed to the PMC hospital, Nalanda Medical College hospital and a hospital in Patna, about from the accident site. The remaining passengers were taken to Patna by a special train. Eleven passengers died at the scene, and one succumbed to his injuries at Patna Medical College hospital.
"Prima facie, the cause of the accident is removal of fish plates on the right side of the tracks," railway officials said.
References
Derailments in India
Railway accidents in 1998
Railway accidents and incidents in Bihar
History of Bihar (1947–present) | Fatuha train crash | Technology | 205 |
20,727,556 | https://en.wikipedia.org/wiki/CatSper1 | CatSper1 is a protein which in humans is encoded by the CATSPER1 gene. CatSper1 is a member of the cation channels of sperm family of proteins. The four proteins in this family together form a sperm-specific, Ca2+-permeant ion channel essential for the correct function of sperm cells.
Function
Calcium ions play a primary role in the regulation of sperm motility. This gene belongs to a family of putative cation channels that are specific to spermatozoa and localize to the flagellum. The protein family features a single repeat with six membrane-spanning segments and a predicted calcium-selective pore region.
References
Further reading
Ion channels | CatSper1 | Chemistry | 138 |
63,551,532 | https://en.wikipedia.org/wiki/AirTag | AirTag is a tracking device developed by Apple. AirTag is designed to act as a key finder, which helps people find personal objects such as keys, bags, apparel, small electronic devices and vehicles. To locate lost items, AirTags use Apple's crowdsourced Find My network, estimated in early 2021 to consist of approximately one billion devices worldwide that detect and anonymously report emitted Bluetooth signals. AirTags are compatible with any iPhone, iPad, or iPod Touch device capable of running iOS/iPadOS 14.5 or later, including iPhone 6S or later (including iPhone SE 1, 2 and 3). Using the built-in U1 chip on iPhone 11 or later (except iPhone SE models), users can more precisely locate items using ultra-wideband (UWB) technology. AirTag was announced on April 20, 2021, made available for pre-order on April 23, and released on April 30.
History
The product was rumored to be under development in April 2019. In February 2020, it was reported that Asahi Kasei was prepared to supply Apple with tens of millions of ultra-wideband (UWB) parts for the rumored AirTag in the second and third quarters of 2020, though the shipment was ultimately delayed. On April 2, 2020, a YouTube video on Apple's Support page also confirmed the AirTag. In Apple's iOS 14.0 release, code was discovered that described the reusable and removable battery that would be used in the AirTag. In March 2021, Macworld stated that iOS 14.5 beta's Find My user interface included "Items" and "Accessories" features meant for AirTag support for a user's "backpack, luggage, headphones" and other objects. AppleInsider noted that the beta included safety warnings for "unauthorized AirTags" persistently in a user's vicinity.
In May 2024, Bloomberg reported that Apple was preparing a new version of the AirTag, codenamed B589.
Features
AirTags can be interacted with using the Find My app. Users may trigger the AirTag to play a sound from the app. iPhones equipped with the U1 chip can use "Precision Tracking" to provide direction to and precise distance from an AirTag. Precision Tracking utilizes ultra-wideband.
AirTags are not satellite navigation devices. AirTags are located on a map within the Find My app by utilizing Bluetooth signals from other anonymous iOS and iPadOS devices out in the world. To help prevent unwanted tracking, an iOS/iPadOS device will alert its owner if someone else's AirTag seems to be with them, instead of with the AirTag's owner, for too long. If an AirTag is out of range of any Apple device for more than 8 to 24 hours, it will begin to beep to alert a person that an AirTag may have been placed in their possessions.
Users can mark an AirTag as lost and provide a phone number and a message. Any iPhone user can see this phone number and message with the "Identify Found Item" feature within the Find My app, which utilizes near-field communication (NFC) technology. Additionally, Android and Windows 10 Mobile phones with NFC can identify an AirTag with a tap, which will redirect to a website containing the message and phone number.
AirTag requires an Apple ID and iOS or iPadOS 14.5 or later. It uses a replaceable CR2032 button cell with one year of battery life (though some batteries with child-resistant bitterant coatings cannot be used due to the design of the AirTag battery terminal). The maximum range of Bluetooth tracking is estimated to be around 100 meters. The AirTag is rated IP67 for water and dust resistance; it can withstand 30 minutes of water immersion in standard laboratory conditions. Each Apple ID is limited to 32 AirTags.
Firmware version history
Apple does not provide a way for users to force an AirTag to carry out a firmware update. Firmware updates may happen automatically whenever an AirTag is in Bluetooth range of the paired iPhone (running iOS 14.5 or later) and both devices have sufficient battery.
Applications
Tracking checked luggage
AirTags have become extremely popular among travelers for tracking checked luggage on flights and for holding carriers accountable when luggage is lost. In response, Lufthansa stated that AirTags were not permissible in luggage checked with the carrier. The carrier backtracked after a risk assessment by German authorities following widespread criticism and accusations that it was seeking to avoid accountability. The Federal Aviation Administration has ruled that storing AirTags in checked luggage is permitted and not a safety hazard despite their containing batteries.
Theft prevention and recovery
AirTags have been used to track stolen property and assist police in recovering them for return to their rightful owners. In February 2023, a North Carolina family discovered that their car had been stolen. In coordination with local police, they utilized an AirTag placed in the vehicle to locate the car and were able to recover their property. Police were reportedly elated at the ease at which they were able to arrest the criminals and recover the property thanks to the AirTags.
Criticism
Use by stalkers
Despite Apple's inclusion of technologies to help prevent unwanted tracking or stalking, The Washington Post found that it was "frighteningly easy" to bypass the systems put in place. It has been described as "a gift to stalkers". Concerns included the built-in audible alarm taking three days to sound (since reduced to 8–24 hours), and the fact that most Americans had Android devices that would not receive the alerts about nearby AirTags that iPhone devices receive. Most of an AirTag's components cannot be replaced, but AirTags with their speakers forcibly removed have been found in use to track people. The AirTag cannot detect this modification, making it harder for people to find out that an AirTag is tracking them. AirTags with their speakers removed have been found for sale on sites like eBay and Etsy. In January 2022, BBC News spoke to six women who stated that they found unregistered AirTags inside things such as cars and bags.
In late 2021, Apple released an app called Tracker Detect on the Google Play Store to help users of Android 9 or later to discover unknown AirTags near them in a "lost" state and potentially being used for malicious tracking purposes. However, the app does not run in the background.
In February 2022, Apple added a warning for users setting up their AirTag, notifying them that using the device to track people is illegal and the device is only meant for tracking personal belongings. It will take 8–24 hours for an AirTag to chirp if it has been separated from its owner.
Tracking cars
The National Post in Canada reported that AirTags were placed on vehicles at shopping malls and parking lots without the drivers' knowledge, in order to track them to their homes, where the vehicles would be stolen. In response, Apple announced just before WWDC 2021 that it had begun rolling out updates that would allow anyone with an NFC-capable phone to tap an unwanted AirTag for instructions on how to disable it, and that they had decreased the delay time for the audible alert that sounds after the AirTag is separated from its owner from three days to a random time between 8 and 24 hours.
Susceptibility to hacking
Users who set their AirTags to lost mode are prompted to provide a contact phone number for finders to call. In September 2021, security researcher Brian Krebs, citing fellow security researcher Bobby Rauch, reported that the phone number field will actually accept any type of input, including arbitrary computer code, opening up the potential use of AirTags as Trojan horse devices.
Similarity to Tile
Similar product manufacturer Tile criticized Apple for using similar technologies and designs to Tile's trackers. Spokespeople for Tile testified to the United States Congress that Apple was supporting "anti-competitive practices", claiming that Apple had done this in the past, and that they think it is "entirely appropriate for Congress to take a closer look at Apple's business practices".
Difficulty attaching
AirTags do not have holes or other mechanical features that would allow them to be positively attached or affixed to the item being tracked; solutions include adhesives (glue, tape) and purpose-built accessories. The polyurethane AirTag Loop is the least expensive solution sold by Apple; it costs the same as a single AirTag and has been criticized as an "accessory tax".
See also
iBeacon
Galaxy SmartTag
List of UWB-enabled devices
References
External links
Apple Inc. hardware
Internet object tracking
Apple Inc. peripherals
IPhone accessories
Products introduced in 2021 | AirTag | Technology | 1,812 |
22,834,255 | https://en.wikipedia.org/wiki/Fermat%20quotient | In number theory, the Fermat quotient of an integer a with respect to an odd prime p is defined as
qp(a) = (a^(p−1) − 1)/p,
or
δp(a) = (a − a^p)/p.
This article is about the former; for the latter see p-derivation. The quotient is named after Pierre de Fermat.
If the base a is coprime to the exponent p then Fermat's little theorem says that qp(a) will be an integer. If the base a is also a generator of the multiplicative group of integers modulo p, then qp(a) will be a cyclic number, and p will be a full reptend prime.
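A minimal illustrative sketch of the definition and its integrality (function name and examples are ours, not standard):

```python
# Fermat quotient qp(a) = (a**(p-1) - 1) // p, an integer whenever
# gcd(a, p) = 1 by Fermat's little theorem.

def fermat_quotient(a: int, p: int) -> int:
    assert p > 2 and a % p != 0, "requires an odd prime p with p not dividing a"
    numerator = a ** (p - 1) - 1
    assert numerator % p == 0        # guaranteed by Fermat's little theorem
    return numerator // p

print(fermat_quotient(2, 3))             # 1
print(fermat_quotient(2, 5))             # 3
print(fermat_quotient(2, 1093) % 1093)   # 0, since 1093 is a Wieferich prime
```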
Properties
From the definition, it is obvious that qp(1) = 0.
In 1850, Gotthold Eisenstein proved that if a and b are both coprime to p, then:
qp(ab) ≡ qp(a) + qp(b) (mod p)
qp(a^r) ≡ r·qp(a) (mod p)
qp(p − 1) ≡ 1 (mod p)
qp(p + 1) ≡ −1 (mod p)
Eisenstein likened the first two of these congruences to properties of logarithms. These properties imply
In 1895, Dmitry Mirimanoff pointed out that an iteration of Eisenstein's rules gives the corollary:
From this, it follows that:
Lerch's formula
M. Lerch proved in 1905 that
Here Wp = ((p − 1)! + 1)/p is the Wilson quotient.
Special values
Eisenstein discovered that the Fermat quotient with base 2 could be expressed in terms of the sum of the reciprocals modulo p of the numbers lying in the first half of the range {1, ..., p − 1}:
−2qp(2) ≡ 1/1 + 1/2 + ⋯ + 1/((p − 1)/2) (mod p)
Later writers showed that the number of terms required in such a representation could be reduced from 1/2 to 1/4, 1/5, or even 1/6:
Eisenstein's series also has an increasingly complex connection to the Fermat quotients with other bases, the first few examples being:
Generalized Wieferich primes
If qp(a) ≡ 0 (mod p) then a^(p−1) ≡ 1 (mod p^2). Primes for which this is true for a = 2 are called Wieferich primes. In general they are called Wieferich primes base a. Known solutions of qp(a) ≡ 0 (mod p) for small values of a are:
{| class="wikitable"
|-----
! a
! p (checked up to 5 × 1013)
! OEIS sequence
|-----
| 1 || 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ... (All primes)
|
|-----
| 2 || 1093, 3511
|
|-----
| 3 || 11, 1006003
|
|-----
| 4 || 1093, 3511
|
|-----
| 5 || 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801
|
|-----
| 6 || 66161, 534851, 3152573
|
|-----
| 7 || 5, 491531
|
|-----
| 8 || 3, 1093, 3511
|
|-----
| 9 || 2, 11, 1006003
|
|-----
| 10 || 3, 487, 56598313
|
|-----
| 11 || 71
|
|-----
| 12 || 2693, 123653
|
|-----
| 13 || 2, 863, 1747591
|
|-----
| 14 || 29, 353, 7596952219
|
|-----
| 15 || 29131, 119327070011
|
|-----
| 16 || 1093, 3511
|
|-----
| 17 || 2, 3, 46021, 48947, 478225523351
|
|-----
| 18 || 5, 7, 37, 331, 33923, 1284043
|
|-----
| 19 || 3, 7, 13, 43, 137, 63061489
|
|-----
| 20 || 281, 46457, 9377747, 122959073
|
|-----
| 21 || 2
|
|-----
| 22 || 13, 673, 1595813, 492366587, 9809862296159
|
|-----
| 23 || 13, 2481757, 13703077, 15546404183, 2549536629329
|
|-----
| 24 || 5, 25633
|
|-----
| 25 || 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801
|
|-----
| 26 || 3, 5, 71, 486999673, 6695256707
|
|-----
| 27 || 11, 1006003
|
|-----
| 28 || 3, 19, 23
|
|-----
| 29 || 2
|
|-----
| 30 || 7, 160541, 94727075783
|
|}
For more information, see the corresponding OEIS sequences.
The smallest solutions of qp(a) ≡ 0 (mod p) with a = n are:
2, 1093, 11, 1093, 2, 66161, 5, 3, 2, 3, 71, 2693, 2, 29, 29131, 1093, 2, 5, 3, 281, 2, 13, 13, 5, 2, 3, 11, 3, 2, 7, 7, 5, 2, 46145917691, 3, 66161, 2, 17, 8039, 11, 2, 23, 5, 3, 2, 3, ...
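The start of this sequence is cheap to reproduce; a brute-force illustrative sketch (the prime test and search limit are deliberately simple):

```python
# Least prime p not dividing a with a**(p-1) ≡ 1 (mod p**2), i.e. the
# smallest base-a Wieferich prime (equivalent to qp(a) ≡ 0 mod p for odd p).

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_solution(a: int, limit: int = 2000):
    for p in range(2, limit):
        if is_prime(p) and a % p != 0 and pow(a, p - 1, p * p) == 1:
            return p
    return None  # nothing below the (deliberately small) search limit

print([smallest_solution(a) for a in range(1, 6)])
# [2, 1093, 11, 1093, 2] -- matching the start of the sequence above
```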
A pair (p, r) of prime numbers such that qp(r) ≡ 0 (mod p) and qr(p) ≡ 0 (mod r) is called a Wieferich pair.
References
External links
Gottfried Helms. Fermat-/Euler-quotients (a^(p−1) − 1)/p^k with arbitrary k.
Richard Fischer. Fermat quotients B^(P-1) == 1 (mod P^2).
Number theory | Fermat quotient | Mathematics | 1,416 |
25,810,292 | https://en.wikipedia.org/wiki/Sensistor | Sensistor is a resistor whose resistance changes with temperature.
The resistance increases exponentially with temperature; that is, the temperature coefficient is positive (e.g. 0.7% per degree Celsius).
Sensistors are used in electronic circuits for compensation of temperature influence or as sensors of temperature for other circuits.
Sensistors are made by using very heavily doped semiconductors, so that their operation is similar to that of PTC-type thermistors. However, a very heavily doped semiconductor behaves more like a metal, and the resistance change is more gradual than is the case for other PTC thermistors.
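As an illustration, a simple exponential model using the 0.7% per °C coefficient mentioned above (the model form and base resistance are assumptions, not data for a specific device):

```python
# Exponential resistance-temperature model for a sensistor-like device.
# Coefficient from the example above; R0 and model form are assumptions.

def resistance(r0: float, t: float, t0: float = 25.0,
               alpha: float = 0.007) -> float:
    """Resistance at temperature t (deg C) for a device with R(t0) = r0."""
    return r0 * (1.0 + alpha) ** (t - t0)

r0 = 1000.0  # ohms at 25 deg C (assumed)
for t in (25.0, 50.0, 100.0):
    print(f"{t:5.1f} C -> {resistance(r0, t):7.1f} ohm")
```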
See also
thermistor
References
Sensors | Sensistor | Technology,Engineering | 138 |
9,708,165 | https://en.wikipedia.org/wiki/Collar%20stay | A collar stay, collar stick, collar bone (British English), collar tab (British English), collar stiffener, or collar stiff is a shirt accessory consisting of a smooth strip of rigid material, rounded at one end and pointed at the other, inserted into specially made pockets on the underside of a shirt collar to stabilize the collar's points. The stays ensure that the collar lies flat against the collarbone, looking crisp and remaining in the correct place.
Collar stays can be made from a variety of materials, including metal (such as brass, stainless steel, or sterling silver), horn, baleen, mother of pearl, or plastic. Shirts often come with plastic stays that may eventually need to be replaced if they bend; metal replacements do not have this problem.
Collar stays can be found in haberdashers, fabric- and sewing-supply stores and men's clothing stores. They are manufactured in multiple lengths to fit different collar designs, or may be designed with a means to adjust the length of the collar stay.
There are many variations to the traditional collar stay. Some metallic collar stays are sold with a magnet, which is used to hold the stiffened collar in place against the shirt. A different type of collar stay discreetly adds a button hook on one end, to help fasten tiny buttons on dress shirts; e.g. placket, cuffs or button down collars. Adhesive collar stays can be stuck to the underside of a collar to either add stiffness or attach the collar points to the shirt.
Collar stays are removed from shirts before dry cleaning or pressing, as the cleaning process can damage both the shirt and the stays; they are replaced prior to wearing. Shirts that are press ironed with the collar stays are vulnerable to damage, as this results in a telltale impression of the collar stay in the fabric of the collar. Some shirts have stays which are sewn into the collar and are not removable.
Some dress shirts are sold with shorter, wider stays than the classic shirt stay (e.g., Tommy Hilfiger). The classic stay will not work with these shirts.
References
Neckwear
Parts of clothing | Collar stay | Technology | 438 |
63,322,305 | https://en.wikipedia.org/wiki/Keeper%20%28chemistry%29 | Keepers are substances (typically solvents, but sometimes adsorbent solids) added in relatively small quantities during an evaporative procedure in analytical chemistry, such as concentration of an analyte-solvent mixture by rotary evaporation. The purpose of a keeper is to reduce losses of a target analyte during the procedure. Keepers typically have reduced volatility and are added to a more volatile solvent.
In the case of volatile target analytes, it is difficult to totally avoid loss of the analyte in an evaporative procedure, but the presence of a keeper solvent or solid is intended to preferentially solvate or adsorb the analyte, so that the volatility of the analyte is reduced as the evaporative procedure continues. In the case of non-volatile target analytes, the presence of the keeper solvent or solid is intended to prevent all the solvent from being evaporated off, thereby preventing the loss of analytes which might irreversibly adsorb to the container walls when completely dried, or if it is totally dried (in the case of a solid keeper), provide a surface where the analyte can be reversibly rather than irreversibly adsorbed. A solid keeper of sodium sulfate has been shown to be effective for reducing losses of polycyclic aromatic hydrocarbons (PAHs) in evaporative procedures.
Solvents commonly used as keepers
The following solvents are commonly used as keepers:
References
Analytical chemistry | Keeper (chemistry) | Chemistry | 306 |
2,470,776 | https://en.wikipedia.org/wiki/Timeline%20of%20particle%20discoveries | This is a timeline of subatomic particle discoveries, including all particles thus far discovered which appear to be elementary (that is, indivisible) given the best available evidence. It also includes the discovery of composite particles and antiparticles that were of particular historical importance.
More specifically, the inclusion criteria are:
Elementary particles from the Standard Model of particle physics that have so far been observed. The Standard Model is the most comprehensive existing model of particle behavior. All Standard Model particles including the Higgs boson have been verified, and all other observed particles are combinations of two or more Standard Model particles.
Antiparticles which were historically important to the development of particle physics, specifically the positron and antiproton. The discovery of these particles required very different experimental methods from that of their ordinary matter counterparts, and provided evidence that all particles had antiparticles—an idea that is fundamental to quantum field theory, the modern mathematical framework for particle physics. In the case of most subsequent particle discoveries, the particle and its anti-particle were discovered essentially simultaneously.
Composite particles which were the first particle discovered containing a particular elementary constituent, or whose discovery was critical to the understanding of particle physics.
See also
List of baryons
List of mesons
List of particles
References
Particle discoveries
Particle physics | Timeline of particle discoveries | Physics | 264 |
37,572,955 | https://en.wikipedia.org/wiki/List%20of%20MorphOS%20bundled%20applications | This is a sub-article to MorphOS.
A number of bundled applications are delivered with the operating system.
MorphOS bundled applications
References
MorphOS bundled applications | List of MorphOS bundled applications | Technology | 36 |
6,854 | https://en.wikipedia.org/wiki/Church%E2%80%93Turing%20thesis | In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:
In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections.
In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus.
Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers. A toy machine in this style is sketched below.
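A minimal, hypothetical simulator of the tape-and-symbols model (the rule table, state names, and the binary-increment example are our own illustrations, not Turing's formulation):

```python
# Toy Turing-machine simulator: rules map (state, symbol) to
# (new_state, new_symbol, head move). "_" is the blank symbol.
from collections import defaultdict

def run(rules, state, tape, pos=0, max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[pos], move = rules[(state, cells[pos])]
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example: increment a binary number (scan right, then carry leftwards).
rules = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}
print(run(rules, "right", "1011"))  # -> 1100
```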
Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below).
On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined.
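The Church-numeral encoding mentioned above can likewise be illustrated with Python lambdas standing in for λ-terms (an informal sketch; the helper names are ours):

```python
# Church numerals: the numeral n is the function applying f to x n times.
zero = lambda f: lambda x: x                       # corresponds to lf.lx. x
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # ln.lf.lx. f (n f x)
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n) -> int:
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))              # 3
print(to_int(add(three)(three)))  # 6
```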
Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church-Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below).
Statement in Church's and Turing's words
addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result".
In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same:
We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions.
The thesis can be stated as: Every effectively calculable function is a computable function.
Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine".
Turing stated it this way:
It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [ is the footnote quoted above.]
History
One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day: was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?
Circa 1930–1952
In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that:
But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically".
Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Kleene with help of Church and J. Barkley Rosser produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form.
Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".
A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating:
Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church.
Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms.
Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately".
In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of a mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses.
Rosser (1939) formally identified the three notions-as-definitions:
Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I":
The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene had switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer. In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX".
Kleene, finally, uses for the first time the term "Church–Turing thesis" in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone.
Later developments
An attempt to better understand the notion of "effective computability" led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's Game of Life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". The most important is his fourth, "the principle of causality", which is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."
In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to:
"(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize.
"(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in.
"(L.1) (Locality) A computor can change only elements of an observed symbolic configuration.
"(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration.
"(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state."
The matter remains in active discussion within the academic community.
The thesis as a definition
The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function.
Success of the thesis
Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function."
All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1":
Informal usage in proofs
Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive).
Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis:
In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive.
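As a concrete illustration of this pattern (a substitute sketch, not van Dalen's own example, which is not reproduced here), take B to be the set of prime numbers. The English description "divide n by every integer between 2 and n − 1 and accept if and only if none divides evenly" is plainly effective, and the computability theorist would conclude "by the Church–Turing thesis" that B is recursive. A minimal Python rendering of that procedure:

```python
def in_B(n: int) -> bool:
    """Decide membership in B = {primes} by trial division.

    The informal English description of this procedure is already an
    "effective procedure"; the Church-Turing thesis then licenses the
    conclusion that B is recursive (Turing-decidable), without writing
    down an actual Turing machine.
    """
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

assert [k for k in range(10) if in_B(k)] == [2, 3, 5, 7]
```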
Variations
The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable."
The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine.
A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine.
If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation."
Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating: "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, which they call super-Turing computation.
Philosophical implications
Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:
The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.
The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category.
The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation.
There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept.
Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.
Non-computable functions
One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method.
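The appeal to the halting problem in the paragraph above can be made concrete. The sketch below is hypothetical by construction—no `steps_bound` function can actually exist—and it uses a step-counting variant of the busy beaver function (the maximum number of steps an n-state machine can make before halting) together with an assumed `machine.step()` interface. It shows that any computable upper bound on that function would yield a halting decider:

```python
def decide_halting(machine, n_states: int, steps_bound) -> bool:
    """Hypothetical halting decider.

    `steps_bound(n)` is assumed to return a computable upper bound on
    the number of steps any halting n-state Turing machine can make;
    no such computable function exists, which is exactly why the busy
    beaver function is non-computable.  `machine.step()` is an assumed
    interface that advances one step and returns True upon halting.
    """
    limit = steps_bound(n_states)      # impossible step in reality
    for _ in range(limit + 1):
        if machine.step():
            return True                # halted within the bound
    return False                       # ran past the bound: never halts
```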
Several computational models allow for the computation of (Church–Turing) non-computable functions. These are known as hypercomputers.
Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.
See also
Abstract machine
Church's thesis in constructive mathematics
Church–Turing–Deutsch principle, which states that every physical process can be simulated by a universal computing device
Computability logic
Computability theory
Decidability
Hypercomputation
Model of computation
Oracle (computer science)
Super-recursive algorithm
Turing completeness
Footnotes
References
Includes original papers by Gödel, Church, Turing, Rosser, Kleene, and Post mentioned in this section.
Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis and name it "Church's Thesis" (i.e., the Church thesis).
External links
A special issue (Vol. 28, No. 4, 1987) of the Notre Dame Journal of Formal Logic was devoted to the Church–Turing thesis.
Computability theory
Alan Turing
Theory of computation
Philosophy of computer science | Church–Turing thesis | Mathematics,Technology | 5,222 |
39,067,728 | https://en.wikipedia.org/wiki/Squalene/phytoene%20synthase%20family | The squalene/phytoene synthase family represents proteins that catalyze the head-to-head condensation of C15 and C20 prenyl units (i.e. farnesyl diphosphate and geranylgeranyl diphosphate). This enzymatic step constitutes part of the steroid and carotenoid biosynthesis pathways. Squalene synthase (SQS) and phytoene synthase (PSY) are two well-known examples of this protein family and share a number of functional similarities. These similarities are also reflected in their primary structure. In particular, three well-conserved regions are shared by SQS and PSY; they could be involved in substrate binding and/or the catalytic mechanism. SQS catalyzes the conversion of two molecules of farnesyl diphosphate (FPP) into squalene. It is the first committed step in the cholesterol biosynthetic pathway. The reaction carried out by SQS is catalyzed in two separate steps: the first is a head-to-head condensation of the two molecules of FPP to form presqualene diphosphate; this intermediate is then rearranged in an NADPH-dependent reduction to form squalene:
2 FPP → presqualene diphosphate → squalene (NADPH-dependent reduction)
SQS is found in all three domains of life; eukaryotes, archaea (haloarchaea) and bacteria. A recent phylogenetic analysis suggests a bacterial origin of SQS and a later horizontal transfer of the SQS gene to a common ancestor of eukaryotes. Some bacteria are known to alternatively possess a set of three genes to biosynthesize squalene (HpnCDE). HpnC and HpnD are homologous to each other and seem to have co-evolved in HpnCDE-containing species, together with HpnE. HpnCD are further homologous to SQS and PSY and thus are members of the squalene/phytoene synthase family. HpnCD and SQS are inferred to have evolved independently from a PSY homolog.
In yeast, SQS is encoded by the ERG9 gene, in mammals by the FDFT1 gene. SQS is membrane-bound.
PSY catalyzes the conversion of two molecules of geranylgeranyl diphosphate (GGPP) into phytoene. It is the second step in the biosynthesis of carotenoids from isopentenyl diphosphate. The reaction carried out by PSY is catalyzed in two separate steps: the first is a head-to-head condensation of the two molecules of GGPP to form prephytoene diphosphate; this intermediate is then rearranged to form phytoene.
2 GGPP → prephytoene diphosphate → phytoene
PSY is found in all organisms that synthesize carotenoids: plants and photosynthetic bacteria as well as some non-photosynthetic bacteria and fungi. In bacteria PSY is encoded by the gene crtB. In plants PSY is localized in the chloroplast. While PSY/CrtB catalyzes the head-to-head condensation for the C20 prenyl unit (GGPP), a group of homologous proteins labelled as CrtM catalyze the same enzymatic reaction for the C15 unit (FPP). The product of two FPP condensation is dehydrosqualene (diapophytoene). CrtB and CrtM share a common ancestry, but it is not known which evolved first.
While the substrates FPP and GGPP are amphipathic, the products squalene, dehydrosqualene and phytoene are all hydrophobic. Thus, the subsequent enzymatic steps in the steroid and carotenoid biosynthesis take place in the cellular membranes of host organisms.
References
Protein domains
Protein families
EC 2.5.1 | Squalene/phytoene synthase family | Biology | 876 |
37,465,496 | https://en.wikipedia.org/wiki/T%20Scorpii | T Scorpii, or Nova Scorpii 1860, was a nova in the globular cluster Messier 80 (M80). It was discovered on 21 May 1860 by Arthur von Auwers at Koenigsberg Observatory and was independently discovered by Norman Pogson on May 28 at Hartwell observatory. It was at magnitude 7.5 at discovery, reaching a maximum of magnitude 6.8, outshining the whole cluster.
T Scorpii was the first nova ever observed in any type of star cluster.
As of 2019 it was still the only classical nova known for certain to have occurred in a globular cluster. T Scorpii faded by more than 3 magnitudes in 26 days, which means it was a "fast nova". Auwers reported that he had been observing M80 frequently since the beginning of 1859, and the nova was not visible when he observed it on 18 May 1860, 3 days before he first saw the nova. The nova was located less than 3 arc seconds from the center of M80. Astronomers recognized the significance of this object and for at least seven years after its discovery they closely monitored M80's appearance, but the star was never seen again by 19th century observers.
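For readers unfamiliar with the astronomical magnitude scale: the standard Pogson relation says each magnitude step corresponds to a brightness factor of 10^0.4, so the 3-magnitude fade quoted above is a drop in brightness by a factor of roughly 16. A quick check (the arithmetic is illustrative, not from the source):

```python
# Pogson relation: flux ratio for a decline of delta_m magnitudes.
delta_m = 3.0                        # T Scorpii's fade over 26 days
flux_ratio = 10 ** (delta_m / 2.5)   # ~15.8
print(f"fainter by a factor of {flux_ratio:.1f}")
```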
In 1995 Shara and Drissen announced that they had identified the quiescent nova using Hubble Space Telescope images; however, in 2010 Dieball et al. identified a different star as the quiescent nova, based on ultraviolet and X-ray observations. Subsequent publications support the Dieball et al. identification.
References
External links
Novae
Scorpius
1860 in science
Scorpii, T | T Scorpii | Astronomy | 339 |
12,917,243 | https://en.wikipedia.org/wiki/Acidilobus | Acidilobus is a genus of archaea in the family Acidilobaceae.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
See also
List of Archaea genera
References
Further reading
Scientific journals
Archaea genera
Thermoproteota | Acidilobus | Biology | 79 |
20,124,373 | https://en.wikipedia.org/wiki/Andreas%20Acrivos | Andreas Acrivos (born 13 June 1928) is the Albert Einstein Professor of Science and Engineering, emeritus at the City College of New York. He is also the director of the Benjamin Levich Institute for Physicochemical Hydrodynamics.
Education and career
Born in Athens, Greece, Acrivos moved to the United States to pursue an engineering education. He received a bachelor's degree from Syracuse University in 1950; a master's degree from the University of Minnesota in 1951; and a Ph.D. from the University of Minnesota in 1954; all in chemical engineering.
Acrivos is considered to be one of the leading fluid dynamicists of the 20th century. In 1954 Acrivos joined the faculty at the University of California, Berkeley. In 1962, he moved to Stanford University, where he worked with Professor David Mason to build chemical engineering programs. In 1977, he was elected a member of the National Academy of Engineering for contributions in the application of mathematical analysis to the understanding of fundamental phenomena in chemical engineering processes. In 1987, Acrivos joined The City College of the City University of New York as the Albert Einstein Professor of Science and Engineering, succeeding Veniamin Levich.
From 1982 to 1997, Acrivos served as the editor-in-chief of Physics of Fluids.
Awards and honors
National Medal of Science, 2001
Fellow of the American Academy of Arts and Sciences, 1993
Fluid Dynamics Prize, 1991
G. I. Taylor Medal, Society of Engineering Science, 1988
Elected a member of the National Academy of Engineering, 1977
Acrivos has been listed as an ISI Highly Cited Author in Engineering by the ISI Web of Knowledge, Thomson Scientific Company.
References
External links
1928 births
Living people
City College of New York faculty
Fluid dynamicists
Greek emigrants to the United States
20th-century Greek physicists
Members of the United States National Academy of Sciences
Members of the United States National Academy of Engineering
National Medal of Science laureates
Engineers from Athens
Stanford University School of Engineering faculty
University of Minnesota College of Science and Engineering alumni
UC Berkeley College of Engineering faculty
Fellows of the American Physical Society
Fellows of the American Academy of Arts and Sciences
American chemical engineers
Fellows of Clare Hall, Cambridge
Minnesota CEMS
Physics of Fluids editors | Andreas Acrivos | Chemistry | 452 |
64,713,671 | https://en.wikipedia.org/wiki/Closed%20linear%20operator | In functional analysis, a branch of mathematics, a closed linear operator or often a closed operator is a linear operator whose graph is closed (see closed graph property). It is a basic example of an unbounded operator.
The closed graph theorem says a linear operator between Banach spaces that is defined on the whole space is a closed operator if and only if it is a bounded operator. Hence, a closed linear operator that is used in practice is typically only defined on a dense subspace of a Banach space.
Definition
It is common in functional analysis to consider partial functions, which are functions defined on a subset of some space $X$.
A partial function $f$ is declared with the notation $f : D \subseteq X \to Y$, which indicates that $f$ has prototype $f : D \to Y$ (that is, its domain is $D$ and its codomain is $Y$).
Every partial function is, in particular, a function and so all terminology for functions can be applied to them. For instance, the graph of a partial function $f$ is the set $\operatorname{graph}(f) = \{(x, f(x)) : x \in D\}$.
However, one exception to this is the definition of "closed graph". A partial function $f : D \subseteq X \to Y$ is said to have a closed graph if $\operatorname{graph}(f)$ is a closed subset of $X \times Y$ in the product topology; importantly, note that the product space is $X \times Y$ and not $D \times Y$ as it was defined above for ordinary functions. In contrast, when $f$ is considered as an ordinary function (rather than as the partial function $f : D \subseteq X \to Y$), then "having a closed graph" would instead mean that $\operatorname{graph}(f)$ is a closed subset of $D \times Y$. If $\operatorname{graph}(f)$ is a closed subset of $X \times Y$ then it is also a closed subset of $D \times Y$, although the converse is not guaranteed in general.
Definition: If $X$ and $Y$ are topological vector spaces (TVSs), then we call a linear map $f : D(f) \subseteq X \to Y$ a closed linear operator if its graph is closed in $X \times Y$.
Closable maps and closures
A linear operator $f : D \subseteq X \to Y$ is closable in $X \times Y$ if there exists a vector subspace $E \subseteq X$ containing $D$ and a function (resp. multifunction) $F : E \to Y$ whose graph is equal to the closure of the set $\operatorname{graph}(f)$ in $X \times Y$. Such an $F$ is called a closure of $f$ in $X \times Y$, is denoted by $\overline{f}$, and necessarily extends $f$.
If $f : D \subseteq X \to Y$ is a closable linear operator then a core or an essential domain of $f$ is a subset $C \subseteq D$ such that the closure in $X \times Y$ of the graph of the restriction $f\big|_C$ of $f$ to $C$ is equal to the closure of the graph of $f$ in $X \times Y$ (i.e. the closure of $\operatorname{graph}(f)$ in $X \times Y$ is equal to the closure of $\operatorname{graph}(f\big|_C)$ in $X \times Y$).
Examples
An everywhere-defined bounded operator is a closed operator. Here are examples of closed operators that are not bounded.
If $(X, \tau)$ is a Hausdorff TVS and $\nu$ is a vector topology on $X$ that is strictly finer than $\tau$, then the identity map $\operatorname{Id} : (X, \tau) \to (X, \nu)$ is a closed discontinuous linear operator.
Consider the derivative operator $A = \frac{d}{dx}$, where $X = Y = C([a, b])$ is the Banach space of all continuous functions on an interval $[a, b]$.
If one takes its domain $D(A)$ to be $C^1([a, b])$, then $A$ is a closed operator which is not bounded.
On the other hand, if $D(A) = C^\infty([a, b])$ is the space of smooth scalar-valued functions, then $A$ will no longer be closed, but it will be closable, with the closure being its extension defined on $C^1([a, b])$.
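In outline, the closedness claim follows from the classical theorem on uniform convergence of derivatives, and unboundedness from an explicit sequence; the standard argument is sketched below (an illustration added here, not spelled out in the source):

```latex
% Closedness: suppose (f_n, Af_n) converges in C([a,b]) x C([a,b]).
f_n \xrightarrow{\|\cdot\|_\infty} f,
\qquad
A f_n = f_n' \xrightarrow{\|\cdot\|_\infty} g
\;\Longrightarrow\;
f \in C^1([a,b]) \text{ and } f' = g,
% so the limit (f, g) lies in the graph of A, i.e. the graph is closed.

% Unboundedness: take f_n(x) = \sin(nx).  Then
\|f_n\|_\infty \le 1
\qquad\text{while}\qquad
\|A f_n\|_\infty = n \sup_{x \in [a,b]} \lvert\cos(nx)\rvert
\xrightarrow[n \to \infty]{} \infty .
```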
Basic properties
The following properties are easily checked for a linear operator between Banach spaces:
If $A$ is closed then $A - \lambda \operatorname{Id}_{D(A)}$ is closed, where $\lambda$ is a scalar and $\operatorname{Id}_{D(A)}$ is the identity function;
If $A$ is closed, then its kernel (or nullspace) is a closed vector subspace of $X$;
If $A$ is closed and injective, then its inverse $A^{-1}$ is also closed;
A linear operator $A$ admits a closure if and only if for every $x \in X$ and every pair of sequences $x_\bullet = (x_i)_{i=1}^{\infty}$ and $y_\bullet = (y_i)_{i=1}^{\infty}$ in $D(A)$ both converging to $x$ in $X$, such that both $(A x_i)_{i=1}^{\infty}$ and $(A y_i)_{i=1}^{\infty}$ converge in $Y$, one has $\lim_{i \to \infty} A x_i = \lim_{i \to \infty} A y_i$.
References
Linear operators | Closed linear operator | Mathematics | 666 |
3,833,945 | https://en.wikipedia.org/wiki/Disk-covering%20method | A disk-covering method is a divide-and-conquer meta-technique for large-scale phylogenetic analysis which has been shown to improve the performance of both heuristics for NP-hard optimization problems and polynomial-time distance-based methods. Disk-covering methods are a meta-technique in that they have flexibility in several areas, depending on the performance metrics that are being optimized for the base method. Such metrics can be efficiency, accuracy, or sequence length requirements for statistical performance. There have been several disk-covering methods developed, which have been applied to different "base methods". Disk-covering methods have been used with distance-based methods (like neighbor joining) to produce "fast-converging methods", which are methods that will reconstruct the true tree from sequences that have at most a polynomial number of sites.
A disk-covering method has four steps (a schematic code sketch follows the list):
Decomposition: Compute a decomposition of the dataset into overlapping subsets.
Solution: Construct trees on the subsets using a base method.
Merge: Use a supertree method to merge the trees on the subsets into a tree on the full dataset.
Refinement: If the tree obtained in the merge is not fully resolved, then resolve it further into a binary tree so that it optimizes some desired objective criterion.
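A schematic sketch of this four-step skeleton. Everything here is a hypothetical stand-in: a concrete DCM such as Rec-I-DCM3 fixes the decomposition strategy, the base method (e.g. a maximum likelihood heuristic or neighbor joining), the supertree merge, and the refinement criterion, none of which are specified in this outline:

```python
def disk_covering_method(dataset, decompose, base_method,
                         merge_supertree, refine_to_binary):
    """Skeleton of a disk-covering meta-method (all helpers assumed)."""
    # 1. Decomposition: overlapping subsets of the taxa.
    subsets = decompose(dataset)

    # 2. Solution: estimate a tree on each subset with the base method.
    subtrees = [base_method(dataset, subset) for subset in subsets]

    # 3. Merge: combine the subset trees into one tree on all taxa.
    tree = merge_supertree(subtrees)

    # 4. Refinement: resolve remaining polytomies into a binary tree
    #    (tree.is_binary() is an assumed interface on the tree object).
    if not tree.is_binary():
        tree = refine_to_binary(tree)
    return tree
```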
The most prominent use of any disk-covering method is the "Rec-I-DCM3" disk-covering method, which has been used to speed up maximum likelihood and maximum parsimony analyses and is available through the NSF-funded CIPRES project (www.phylo.org). However, disk-covering methods have also been used for estimating evolutionary trees from gene order data.
References
Further reading
T. Warnow. 2005. Large-scale phylogenetic reconstruction. Book chapter, in S. Aluru (editor), Handbook of Computational Biology, Chapman & Hall, CRC Computer and Information Science Series, December 2005.
Computational phylogenetics | Disk-covering method | Chemistry,Biology | 407 |
58,876,063 | https://en.wikipedia.org/wiki/Evaporative%20cooling%20chambers | Evaporative cooling chambers (ECCs), also known as "zero energy cool chambers" (ZECCs), are a type of evaporative cooler, which are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Evaporation of water from a surface removes heat, creating a cooling effect, which can improve vegetable storage shelf life.
ECCs are relatively large compared to the more common household clay pot cooler, and are therefore most suitable for farmers with large production quantities, farming groups, or farming cooperatives.
History
The brick ECC was originally developed in India by Susanta K. Roy and D.S. Khuridiya in the early 1980s to address fruit and vegetable post-harvest losses, especially in rural areas where electricity is non-existent. Roy and Khuridiya’s ECC design is composed of a double brick wall structure, supported by a base layer of brick, and covered with a straw mat.
Suitability
ECCs provide the most benefit when they are used in low-humidity climates (less than 40% relative humidity), when the weather is hot (maximum daily temperature greater than 25 °C), and when water is available to add to the device between one and three times per day. The device should be in a shady and well-ventilated area.
Additionally, storage conditions must meet users' needs for the scale of storage required and the optimal conditions for different vegetables throughout the year. The cost of the ECC must be affordable and justified by the benefits realized from its improved storage.
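The climate criteria above reduce to a simple checklist, sketched below with thresholds taken directly from this section (the function and its inputs are hypothetical, and site factors such as shade and ventilation still need separate assessment):

```python
def ecc_suitable(relative_humidity_pct: float,
                 max_daily_temp_c: float,
                 waterings_per_day: int) -> bool:
    """Rough ECC suitability check using the thresholds quoted above:
    relative humidity below 40%, maximum daily temperature above 25 C,
    and water added one to three times per day."""
    return (relative_humidity_pct < 40
            and max_daily_temp_c > 25
            and 1 <= waterings_per_day <= 3)

assert ecc_suitable(30, 32, 2)       # hot, dry, watered twice daily
assert not ecc_suitable(65, 32, 2)   # too humid for evaporative cooling
```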
Construction
The size of an ECC can be chosen to meet a range of user storage needs; however, the cost can vary significantly based on the desired size and local cost of materials. Because ECCs can be constructed over a range of sizes, it is important to select an appropriate size according to the need to avoid over-building and spending more money than is needed.
Evaporative cooling chambers (ECCs) can be made from locally available materials, including bricks, sand, wood, dry grass, gunny/burlap sacks, and twine. The space in between the two brick walls is filled with sand, which retains the water that is added. If the evaporative cooling chamber is not built in an area that is well shaded, a shed must be constructed to provide shade. Inside the ECC, food is placed in unsealed plastic containers, which keep the vegetables off the ECC’s floor and allows them to breathe and be exposed to the cool, humid air inside the device.
Best Practices for Use
It is important that ECCs are correctly used to ensure maximum cooling performance benefit for the user. Improper use decreases the potential benefits and results in a lower cost-benefit ratio. The vegetables that need storage should be carefully considered, since not all produce can be stored together because some release ethylene, which can accelerate ripening or reduce post-harvest quality.
Before starting to build an ECC, a location should be chosen that is close to water, exposed to wind/breeze, and if possible, where there is shade to avoid the need of a cover. ECCs should be reinstalled every 3 years with new bricks. The cover of the ECC should be opened as infrequently as possible to keep the cool air in. The sand between the bricks must be kept wet; installing an irrigation system can make this process simpler. Additionally, water should be sprinkled on the cover 1-3 times per day.
Sources
References
Cooling technology
Evaporators | Evaporative cooling chambers | Chemistry,Engineering | 717 |
35,263,118 | https://en.wikipedia.org/wiki/Philip%20Hartman | Philip Hartman (May 16, 1915 – August 28, 2015) was an American mathematician at Johns Hopkins University working on differential equations who introduced the Hartman–Grobman theorem. He served as Chairman of the Mathematics Department at Johns Hopkins for several years. He has an Erdös number of 2.
His book gives a necessary and sufficient condition for solutions of ordinary initial value problems to be unique and to depend in a class C1 manner on the initial conditions.
He died in August 2015 at the age of 100.
Publications
References
External links
1915 births
2015 deaths
Educators from Baltimore
20th-century American mathematicians
American men centenarians
Johns Hopkins University alumni
Johns Hopkins University faculty
Dynamical systems theorists
American mathematical analysts | Philip Hartman | Mathematics | 141 |
2,956,374 | https://en.wikipedia.org/wiki/Navigational%20Aids%20for%20the%20History%20of%20Science%2C%20Technology%2C%20and%20the%20Environment%20Project | The Navigational Aids for the History of Science, Technology, and the Environment Project (NAHSTE) was a research archives/manuscripts cataloguing project based at the University of Edinburgh. Following a proposal led by Arnott Wilson in 1999, the project received £261,755 funding from the Research Support Libraries Programme (RSLP) from 2000 until 2002.
The project was designed to access a variety of outstanding collections of archives and manuscripts held at the three partner Higher Education Institutions (HEIs); the University of Edinburgh, University of Glasgow and Heriot-Watt University and to make them accessible on the Internet. The project additionally included linkages to related records held by non-HEI collaborators.
Descriptions of the material conform to ISAD(G) (Second edition), whilst information about key individuals conform to ISAAR(CPF). Catalogues were tagged using the Encoded Archival Description XML standard.
Although the project was completed in 2002, the resulting web service continues to be hosted at Edinburgh.
References
External links
- homepage with links to online collections.
Index (publishing)
University of Edinburgh
Open-access archives | Navigational Aids for the History of Science, Technology, and the Environment Project | Technology | 227 |
24,189,759 | https://en.wikipedia.org/wiki/C7H6O5 | The molecular formula C7H6O5 (molar mass: 170.12 g/mol, exact mass: 170.021523 u) may refer to:
Gallic acid, a phenolic compound
Phloroglucinol carboxylic acid, a phenolic compound
Molecular formulas | C7H6O5 | Physics,Chemistry | 80 |
4,971,672 | https://en.wikipedia.org/wiki/X%20Caeli | X Caeli, or Gamma2 Caeli, is a binary star system in the southern constellation of Caelum. It is barely visible to the naked eye with an apparent visual magnitude of 6.32. Based upon its annual parallax shift, it is located 341 light-years from Earth. The system is moving further away with a heliocentric radial velocity of +6 km/s.
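The parallax value itself did not survive in this text, but the standard relation d [pc] = 1/p [arcsec] lets one infer what it must have been from the quoted distance; the figure computed below is that inference, not a value taken from the source:

```python
# Standard parallax-distance relation: d [parsec] = 1 / p [arcsec].
LY_PER_PC = 3.2616           # light-years per parsec

d_ly = 341.0                 # distance quoted in the article
d_pc = d_ly / LY_PER_PC      # ~104.6 pc
p_mas = 1000.0 / d_pc        # parallax in milliarcseconds, ~9.6 mas
print(f"{d_pc:.1f} pc -> parallax ~ {p_mas:.2f} mas")
```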
The yellow-white-hued primary, component A, has an apparent magnitude of +6.32 and stellar classification of F2 IV/V, showing mixed traits of an F-type main-sequence star and a subgiant. It is classified as a Delta Scuti-type variable star and its brightness varies from magnitude +6.28 to +6.39 with a period of 3.25 hours. A 2000 observing campaign identified at least six independent pulsation modes for this variation. The companion star, component B, has an apparent magnitude of +9.65 and, as of 2000, is at an angular separation of along a position angle of 183°.
References
External links
HR 1653
Image Gamma2 Caeli
F-type giants
Delta Scuti variables
Binary stars
Caelum
Caeli, Gamma2
Durchmusterung objects
032846
023596
1653
Caeli, X | X Caeli | Astronomy | 268 |
26,623,596 | https://en.wikipedia.org/wiki/Boulder%20Climate%20Action%20Plan | The Climate Action Plan (CAP) in Boulder, Colorado, is a set of strategies intended to guide community efforts for reducing greenhouse gas emissions. These strategies have focused on improving energy efficiency and conservation in homes and businesses—the source of nearly three-fourths of local emissions. The plan also promotes strategies to reduce emissions from transportation, which account for over 20 percent of local greenhouse gas sources.
General information
In November 2006, citizens of Boulder, Colorado, voted to approve Ballot Issue No. 202, authorizing the city council to levy and collect an excise tax from residential, commercial and industrial electricity customers for the purpose of funding a climate action plan to reduce greenhouse gas emissions. The plan outlines programs to increase energy efficiency, increase renewable energy use, reduce emissions from motor vehicles, and take other steps toward meeting the goals set in the Kyoto Protocol.
Beginning April 1, 2007, and expiring March 31, 2013, the initial tax rate was set at $0.0022/kWh for residential customers, $0.0004/kWh for commercial customers, and $0.0002/kWh for industrial customers. The city council has the authority to increase the tax after the first year up to a maximum permitted tax rate of $0.0049/kWh for residential customers; $0.0009/kWh for commercial customers; and $0.0003/kWh for industrial customers. Voluntary purchases of utility-provided wind power are exempt from the tax.
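The tax itself is a straightforward per-kWh excise. The sketch below encodes the initial and voter-authorized maximum rates quoted above; the function and constants are illustrative, and the city's actual billing rules are not reproduced here:

```python
# Boulder CAP tax rates in $/kWh, from the 2006 ballot issue.
CAP_TAX_RATES = {
    "residential": {"initial": 0.0022, "maximum": 0.0049},
    "commercial":  {"initial": 0.0004, "maximum": 0.0009},
    "industrial":  {"initial": 0.0002, "maximum": 0.0003},
}

def cap_tax(kwh: float, customer_class: str, schedule: str = "initial") -> float:
    """CAP excise owed on kwh of metered electricity.

    Voluntary utility wind-power purchases are exempt, so such kWh
    would be subtracted before calling this.
    """
    return kwh * CAP_TAX_RATES[customer_class][schedule]

# A residential customer using 600 kWh in a month at the initial rate:
assert abs(cap_tax(600, "residential") - 1.32) < 1e-9
```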
Allocation and generation of fund
Charge: March 2010 rates for electricity customers.
Total fund: $860,265 in the first year and up to $1,342,000/year thereafter through March 31, 2013.
Purpose: Renewable energy, energy efficiency, transportation.
Incentive authority
Authority 1: Ballot Issue 202 (Climate Action Plan Tax). Date enacted: 11/7/2006.
Authority 2: Boulder Revised Code 3-12. Date effective: 4/1/2007. Expiration date: 3/31/2013.
See also
Carbon pricing
Global Action Plan
Transition town
Greenhouse gas emissions by the United States
Chicago Climate Action Plan
San Francisco Climate Action Plan
Presidential Climate Action Plan
References
External links
Boulder's Climate Commitment (City of Boulder)
Climate action in Boulder County
DSIRE Database of State Incentives for Renewables & Efficiency.
Clean air reference website
BAAQMD phone numbers – including 800-EXHAUST (800-394-2878) to report auto exhaust pollution
Air pollution
Emissions reduction
Climate action plans
Boulder, Colorado
Environment of Colorado | Boulder Climate Action Plan | Chemistry | 506 |
50,893,646 | https://en.wikipedia.org/wiki/Floridean%20starch | Floridean starch is a type of a storage glucan found in glaucophytes and in red algae (or rhodophytes), in which it is usually the primary sink for fixed carbon from photosynthesis. It is found in grains or granules in the cell's cytoplasm and is composed of an α-linked glucose polymer with a degree of branching intermediate between amylopectin and glycogen, though more similar to the former. The polymers that make up floridean starch are sometimes referred to as "semi-amylopectin".
Properties
Floridean starch consists of a polymer of glucose molecules connected primarily by α(1,4) linkages, with occasional branch points using α(1,6) linkages. It differs from other common α-linked glucose polymers in the frequency and position of the branches, which gives rise to different physical properties. The structure of floridean starch polymers is most similar to amylopectin and is sometimes described as "semi-amylopectin". Floridean starch is often described in contrast to starch (a mixture of amylopectin and amylose) and glycogen.
Historically, floridean starch has been described as lacking amylose. However, amylose has been identified as a component of floridean starch granules in some cases, particularly in unicellular red algae.
Evolution
Features such as UDP-glucose building blocks and cytosolic storage differentiate the Archaeplastida into two groups: the rhodophytes and glaucophytes, which use floridean starch, and the green algae and plants (Chloroplastida), which use amylopectin and amylose. There is strong phylogenomic evidence that the Archaeplastida are monophyletic and originate from a single primary endosymbiosis event involving a heterotrophic eukaryote and a photosynthetic cyanobacterium.
Evidence indicates that both ancestors would have had established mechanisms for carbon storage. Based on review of the genetic complement of modern plastid genomes, the last common ancestor of the Archaeplastida is hypothesized to have possessed a cytosolic storage mechanism and to have lost most of the endosymbiotic cyanobacterium's corresponding genes. According to this hypothesis, the rhodophytes and glaucophytes retained the ancestral eukaryote's cytosolic starch deposition. Starch synthesis and degradation in green algae and plants is much more complex – but significantly, many of the enzymes that perform these metabolic functions in the interior of modern plastids are identifiably of eukaryotic rather than bacterial origin.
In a few cases, red algae have been found to use cytosolic glycogen rather than floridean starch as a storage polymer; examples such as Galdieria sulphuraria are found in the Cyanidiales, which are unicellular extremophiles.
Other organisms whose evolutionary history suggests secondary endosymbiosis of a red alga also use storage polymers similar to floridean starch, for example, dinoflagellates and cryptophytes. The presence of floridean starch-like storage in some apicomplexan parasites is one piece of evidence supporting a red alga ancestry for the apicoplast, a non-photosynthetic organelle.
History
Floridean starch is named for a class of red algae, the Florideae (now usually termed Florideophyceae). It was first identified in the mid-19th century and extensively studied by biochemists in the mid-20th century.
References
Starch
Red algae | Floridean starch | Biology | 821 |
18,242,141 | https://en.wikipedia.org/wiki/List%20of%20quasiparticles | This is a list of quasiparticles and collective excitations used in condensed matter physics.
List
References
Physics-related lists | List of quasiparticles | Physics,Materials_science | 42 |
50,394,967 | https://en.wikipedia.org/wiki/Fernando%20Brand%C3%A3o | Fernando Brandão (born 22 January 1983, Belo Horizonte, Brazil) is a Brazilian physicist and computer scientist working on quantum information and quantum computation. He is currently the Bren Professor of Theoretical Physics at the California Institute of Technology and Director of Quantum Applications at Amazon Web Services. Previously, he was a researcher at Microsoft and a reader in Computer Science at University College London.
He is an editor of the journal Physics Reports. He was awarded the 2013 European Quantum Information Young Investigator Award for "his highly appraised achievements in entanglement theory, quantum complexity theory, and quantum many-body physics, which combine dazzling mathematical ability and impressive physical insight". He was awarded the 2020 American Physical Society Rolf Landauer and Charles H. Bennett award for his contributions to entanglement theory.
References
External links
Academic page
Personal website
CV
Google Scholar page
Brazilian physicists
California Institute of Technology faculty
Quantum physicists
Quantum information scientists
1983 births
Living people | Fernando Brandão | Physics | 190 |
73,263,540 | https://en.wikipedia.org/wiki/Katsarosite | Katsarosite is a rarely occurring mineral from the mineral class of organic compounds with the chemical composition Zn(C2O4)·2H2O and is therefore a water-containing zinc(II) oxalate or the zinc salt of oxalic acid.
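As a quick sanity check on the formula Zn(C2O4)·2H2O, one can total standard atomic weights; the arithmetic below is an illustration added here, not a figure from the source:

```python
# Standard atomic weights in g/mol (rounded).
ZN, C, O, H = 65.38, 12.011, 15.999, 1.008

# Zn(C2O4)·2H2O: one Zn, one oxalate (2 C + 4 O), and two waters.
molar_mass = ZN + (2 * C + 4 * O) + 2 * (2 * H + O)
print(f"molar mass ~ {molar_mass:.1f} g/mol")   # ~189.4 g/mol
```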
Katsarosite is categorized in the humboldtine group as the Zn analogue of humboldtine (Fe(C2O4)·2H2O). It is the second Zn-bearing oxalate mineral after alterite.
Katsarosite crystallizes in the monoclinic crystal system and appears as mostly fine granular to earthy crystals, usually rounded, with an average diameter of 30 μm. The color depends on the iron (Fe2+) content, ranging from pure white to yellow in Fe-rich specimens.
The mineral is named after Īraklīs Katsaros (ΗΡΑΚΛΗΣ ΚΑΤΣΑΡΟΣ) of Lavrion, who as a guide has led a large number of scientific archaeological and mineralogical sampling tours through the ancient mining system of Lavrion. His help enabled more than 100 publications in the field of archaeology/mining history and mineralogy/geology. His assistance is further acknowledged in dozens of peer-reviewed scientific papers.
Type material is deposited in the collections of the Institut für Mineralogie und Kristallographie der Universität Wien, Althanstrasse 14, 1090 Vienna, Austria, catalogue number HS13.977 (holotype); the Mineralogical Museum of Lavrio, Andrea Kordella Ave., 19500 Lavrio, Greece, catalogue number T3201 (cotype).
References
External links
Mineralien Atlas
lavrion.gr: Katsarosite
Mindat.org
Webmineral.com
Organic minerals
Monoclinic minerals
Oxalate minerals
Zinc minerals | Katsarosite | Chemistry | 388 |
58,134,314 | https://en.wikipedia.org/wiki/Maurice%20Brodie | Maurice Brodie (1903–1939) was a British-born American virologist who developed a polio vaccine in 1935.
Early years and education
Brodie was born in Liverpool, England, the son of Samuel Broude and Esther Ginsburg. The family immigrated to Ottawa, Canada, in 1910. Maurice graduated from Lisgar Collegiate Institute and McGill University Faculty of Medicine, Alpha Omega Alpha, in 1928; he was named a Wood Gold Medalist. He served as a medical intern, and in 1931 he received a Master of Science degree in physiology from McGill. Brodie belonged to the McGill chapter of Sigma Alpha Mu, and had been a staff reporter of the Ottawa Citizen, 1927–1928 (Dr. Maurice Brodie dies in Detroit, The Gazette (Montreal), 12 May 1939). At McGill in 1932 he received a grant from the Banting Research Foundation for his studies of polio.
Polio research
Maurice Brodie joined the New York City Health Department and the bacteriology department at New York University Medical College.
In 1935, Brodie demonstrated induction of immunity in monkeys with inactivated polio virus. Isabel Morgan demonstrated the same phenomenon again a decade later.
Brodie was head of one of two separate teams that developed polio vaccines and reported their results at the annual meeting of the American Public Health Association in November 1935. Both projects were cancelled as a result of complications from vaccine trials resulting in the death of 6 participants and the paralysis of 10 others. The resulting public outrage delayed further research on the polio vaccine until the 1950s, when the Salk and Sabin vaccines were produced.
John Kolmer, of Temple University in Philadelphia, presented his findings first. He had developed an attenuated poliovirus vaccine, which he tested in about 10,000 children across much of the United States and Canada. Five of these children died of polio and 10 more were paralyzed, usually in the arm where the vaccine was injected, and frequently affecting children in towns where no polio outbreak had occurred. He had no control group, but asserted that many more children would have gotten sick. The response from other researchers was uncharacteristically blunt; one of them directly called Kolmer a murderer.
Brodie presented his results afterwards, but the feelings of the researchers were already unfavorable before he started because of Kolmer's report. Brodie and his team had prepared a formaldehyde-killed poliovirus vaccine, testing it first on Brodie himself and five co-workers, and eventually on 7,000 children and adults, with another 4,500 people serving as a control group. In the control group, Brodie reported that five out of 4500 developed polio; in the group receiving the vaccine, one out of 7,000 developed polio. This difference is not quite statistically significant, and other researchers believed that the one case was likely caused by the vaccine. Two more possible cases were reported later.
Rockefeller Institute Virologist Thomas Rivers declared that Brodie's vaccine was ineffective, while the safety of Kolmer’s vaccine was in doubt. Dr William Hallock Park, director of the New York City Health Department Research Laboratories, thereupon decided to discontinue development of Brodie's vaccine, which he had sponsored. But some experts felt Brodie's vaccine deserved further study; the case against it was inconclusive and too hastily drawn.
Later career
In 1936, Brodie moved to Detroit, where he became director of laboratories at Providence Hospital and hospital pathologist. He died suddenly while working in his laboratory at 3:45 pm on Tuesday, May 9, 1939; the cause of death was coronary thrombosis. His remains were sent to Ottawa for burial (Burial in Ottawa for Dr. Brodie, Detroit Times, May 11, 1939, p. 3). He was interred in the Jewish Cemetery on Metcalfe Road (now the Jewish Memorial Gardens on Bank Street) in Ottawa (Maurice Brodie, noted scientist, passes at Detroit, Ottawa Citizen, 10 May 1939, p. 1).
Family
Maurice Brodie was a brother of Bernard Beryl Brodie (7 August 1907 – 28 February 1989), a leading researcher on drug therapy.
References
Further reading
Steven Lehrer. Explorers of the Body. Doubleday, 1979; 2006.
1903 births
1939 deaths
American medical researchers
American virologists
New York University faculty
Polio
Vaccinologists
Medical doctors from Liverpool
McGill University Faculty of Medicine alumni
New York University Grossman School of Medicine faculty
Lisgar Collegiate Institute alumni
Health professionals from Merseyside
English emigrants to the United States
English expatriates in Canada | Maurice Brodie | Biology | 927 |
1,856,606 | https://en.wikipedia.org/wiki/Expression%20pedal | An expression pedal is an important control found on many musical instruments including organs, electronic keyboards, and pedal steel guitar. The musician uses the pedal to control different aspects of the sound, commonly volume. Separate expression pedals can often be added to a guitar amplifier or effects unit and used to control many different aspects of the tone.
Because the source of power with a pipe organ and electronic organs is not generated by the organist, the volume of these instruments has no relationship with how hard their keys or pedals are struck; i.e., the organ produces the same volume whether the key or pedal is depressed gently or firmly. Moreover, the tone will remain constant in pitch, volume, and timbre until the key or pedal is lifted, at which point the sound stops. The expression pedal gives the organist control over the external source of power, and thus the volume, of the instrument, while leaving the user's hands free.
This system of dynamic control is completely distinct from the act of adding stops (in the case of pipe organs) or pulling more drawbars (in the case of organs and synthesizers). Furthermore, the expression pedal can influence the volume (and, to a lesser degree, the timbre) of a note while it is being played; unlike other instruments, in which the note typically decays after it is first sounded, the organist can increase the strength of a chord or note as it sounds by increasing pressure on the expression pedal.
An organ expression pedal is typically a large pedal, resembling an oversized automobile accelerator, either partially or fully recessed within the organ console and located either directly above or to the right of the organ's pedalboard. As the pedal is pressed forward with the toes, the volume of the sound is increased; as it is depressed with the heel, the volume is decreased. A stand-alone expression pedal used with electronic keyboards, amplifiers, and effects is usually a smaller pedal made of metal or plastic that can be placed on the floor and then connected to the device with an instrument cable.
Pipe organs
Beginning in the nineteenth century, it became common for one or more divisions of pipes in a pipe organ to be enclosed in a wooden box, at least one side of which would consist of palettes that open and close in a manner similar to a Venetian blind. A mechanical (later electrical) mechanism connected the box to a pedal that the organist would use to open and close the shutters, adjusting the perceived loudness of the sound. When the box is shut (or closed), less sound is released into the venue. In American and British organs, the enclosed division is usually named the Swell, and the box surrounding the pipes is usually referred to as the swell box. Thus, the expression pedal is sometimes known as the swell pedal or swell shoe. Sometimes the swell pedal is referred to by its German name, schweller. Larger organs may have two or more expression pedals, allowing the volume of different divisions to be individually controlled.
No matter how well a swell box is designed, the sound of the pipes is altered by their enclosure. Even when the shutters are fully opened, the pipes do not speak as clearly into the room as they would if they were otherwise unenclosed. In some instances and applications, particularly for the performance of Romantic organ music, enclosure in an expression chamber can remove some of the shrillness of organ pipes. Romantic instruments frequently have more than one division (keyboard) under expression.
On pipe organs, the expression pedal should not be confused with the crescendo pedal, which progressively adds stops as it is opened, building from piano to fortissimo.
Ratchet swell lever
Historically, the palettes were manipulated by a ratchet swell, a lever operated by the foot to the side of the console. The lever would fit into two or three different notches, which would lock the position of the lever, and therefore the shutters, in place. To change the position of the shutters, the lever would be kicked sideways to allow it to travel into a new position. The lever was weighted so that its default position was at the top, with the shutters closed. As the lever was lowered, the shutters would open.
Balanced swell pedal
The balanced swell pedal (as pictured above) was developed in the late nineteenth century so that the opening of the box could be fixed at any degree (not just the two or three options of the ratchet swell). This pedal is fitted above the center of the pedalboard. It usually rotates towards and away from the organist through a distance of about 90° from an almost vertical position ("shut") to a near horizontal position ("open"). Because the pedal is balanced, the organist does not need to hold it in position, and it will balance at any point in its travel. In addition, this location for most expression pedals, above the center of the pedalboard, is much more convenient for use by both feet if necessary (although it is usually operated with the right foot).
Comparison: ratchet lever vs. balanced pedals
Correspondence to The Musical Times in 1916 debates the merits of both the ratchet lever and balanced pedal systems of expression. One writer suggests that balanced expression pedals are either too sensitive or not sensitive enough and are unable to produce effective sforzandos (though many improvements have been made since this letter was written), and that he knows many organists who are having balanced expression pedals removed. One organist most open to the change suggests that real crescendos and diminuendos are not possible with a ratchet swell lever, as the notches provided are always either just under or just above the required dynamic level. Furthermore, he states that the balanced expression pedal affords the ease of use of either foot, whereas the previous correspondent desired two ratchet levers, one at either side of the pedalboard.
Other expression technologies
In 1933, Aubrey Thompson-Allen created the "Infinite Speed and Gradation Swell Engine," based on the work of Henry Willis III, the grandson of Father Henry Willis. The mechanism allowed for an infinitesimally slow as well as an instantaneous opening and closing of the swell shades. The spring-loaded expression pedal sits in what would be a half-open position on a normal balanced pedal. The mechanism opens the swell shades at a speed relative to how far the expression pedal is pressed. This uncommon device requires a completely different expressive technique than the balanced expression pedal. It is found on very few organs.
Reed organs and harmoniums
Reed organs and harmoniums of the late nineteenth and early twentieth centuries featured a pair of bellows pedals at the base of the instrument. When the pedals were pumped up and down, air was drawn across the organ's reeds, producing sound. This capability made the harmonium widely available to homes and small churches, though the dynamic range tended to be limited (not to mention that the organist would eventually tire from pumping). However, these free-reed organs had several ways of controlling their volume and expression. Unlike a pipe organ with a blower, the wind pressure of the reed organ can be directly controlled by varying the speed the bellows are operated with the feet, providing a means of producing softer or harsher tones. Harmoniums (where the bellows provided a positive pressure) usually had an air reservoir to reduce the effort needed to pump the bellows when many stops were engaged.
However, some instruments had a system to bypass the reservoir if the player wanted more direct control. Reed organs often had a form of swell shutter mechanism. The bellows system and reed ranks were contained in a wooden frame. By covering this frame, a swell box was created. A single shutter in the top of the box, inside the organ case, allowed the volume to be controlled. Since the player's feet were needed to operate the bellows, the swell shutter was controlled by a lever operated by the player's knee. The lever operated horizontally, and the player pushed their knee towards the side of the instrument to open the shutter. The lever returned to the 'closed' position on a spring or a locking mechanism could be engaged to hold the shutter open.
Subsequently, the electrically powered reed organ, and later the electronic organ, allowed the bellows pedals to be replaced with an expression pedal, allowing the organist to effect a more substantial change in volume more easily than was possible with the manually pumped instrument.
Electronic organs
The style of popular organ music of the 20th and 21st centuries such as jazz is highly dynamic and requires constant use of the expression pedal in a fashion very different from that of classical organ literature. This tendency increased with the arrival of spinet organs and modern synthesizers, which offset the expression pedal and reduced the size of the pedalboard. These changes allowed the organist to keep the right foot constantly on the expression pedal, while playing the pedalboard with only the left foot. This ability encouraged organists to operate the expression pedal more often during playing. To take advantage of this style of playing, some expression pedals on modern electronic organs are equipped with toe switches, which allow the organist to make quick registration changes without removing the foot from the pedal.
Expression pedals may be non-linear in response, meaning that slight pressure changes may cause a greater proportional change in volume than a more complete depression. In this regard, each organ tends to be rather distinct.
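To make the non-linearity concrete, the sketch below maps a normalized pedal position to an amplitude gain using an exponential ("audio-taper") curve. The 40 dB range, the function name, and the curve shape are illustrative assumptions, not the behaviour of any particular instrument:

#include <math.h>
#include <stdio.h>

/* Map a normalized pedal position (0.0 = closed, 1.0 = open) to a gain
   factor. A linear response would return the position directly; the
   exponential curve below gives roughly equal perceived loudness steps
   per unit of pedal travel. The 40 dB range is an assumed value. */
static double pedal_to_gain(double position)
{
    const double range_db = 40.0;
    double db = -range_db * (1.0 - position); /* -40 dB closed, 0 dB open */
    return pow(10.0, db / 20.0);              /* decibels to amplitude ratio */
}

int main(void)
{
    for (double pos = 0.0; pos <= 1.0; pos += 0.25)
        printf("position %.2f -> gain %.4f\n", pos, pedal_to_gain(pos));
    return 0;
}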
Guitars and digital effects
While most electronic effects units for electric guitar use footswitches to turn an effect on and off, some guitar effects, such as wah, vibrato, and swell, use a variable treadle-style pedal control similar to a keyboard expression pedal. Historically, these have been built into dedicated pedals for each effect, but modern digital amplifiers and processors allow many different effects to be built into a single, small floor or rack mount unit. Special built-in or plug-in expression pedals can be used with these devices to allow control of multiple effects with a single expression pedal.
Using the expression pedal, the musician can not only emulate dedicated effects such as wah, but also have real time control of almost any variable such as volume, tone, echo repeats, effect speed, and so on. Some guitar expression pedals include integrated toe switches similar to those for some electric organs. The switches provide even more control, allowing the musician to turn effects on and off, and switch between different amplifiers.
Advances in technology are taking expression beyond pedals, making smaller and remote devices possible. Motion-controlled expression devices, also called inertial audio effects controllers, perform the same function as an expression pedal but provide added dexterity, movement dynamics and features.
Synthesizers
Expression pedals are typically used to control a range of synthesizer functions and effects parameters in real time. The most common type of pedal contains a simple potentiometer, mechanically linked to the pedal mechanism and electrically connected to the synthesizer, most commonly with a 1/4" TRS plug. Such pedals either require a built-in analogue pedal input, or must be converted to MIDI or USB for use with software synthesizers running on computers, using one of the commercially available pedal interfaces.
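A minimal sketch of the conversion step described above, turning a raw potentiometer reading into a MIDI Control Change message. CC number 11 ("Expression") is the standard assignment in the MIDI specification; the 10-bit ADC width and the function name are assumptions for illustration:

#include <stdint.h>
#include <stdio.h>

/* Convert a raw 10-bit ADC reading from the pedal's potentiometer wiper
   into a 3-byte MIDI Control Change message. */
static void pedal_to_midi_cc(uint16_t adc_value, uint8_t channel, uint8_t msg[3])
{
    uint8_t value = (uint8_t)((adc_value * 127UL) / 1023UL); /* scale to 0-127 */
    msg[0] = 0xB0 | (channel & 0x0F);  /* Control Change status, channel 0-15 */
    msg[1] = 11;                       /* CC 11: Expression controller */
    msg[2] = value & 0x7F;             /* 7-bit data byte */
}

int main(void)
{
    uint8_t msg[3];
    pedal_to_midi_cc(512, 0, msg);     /* pedal at roughly half travel */
    printf("%02X %02X %02X\n", msg[0], msg[1], msg[2]);
    return 0;
}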
References
Pipe organ components | Expression pedal | Technology | 2,271 |
13,443,170 | https://en.wikipedia.org/wiki/Chirikov%20criterion | The Chirikov criterion or Chirikov resonance-overlap criterion was established by the Russian physicist Boris Chirikov. In 1959 he published a seminal article in which he introduced the first physical criterion for the onset of chaotic motion in deterministic Hamiltonian systems. He then applied the criterion to explain puzzling experimental results on plasma confinement in magnetic bottles obtained by Rodionov at the Kurchatov Institute.
Description
According to this criterion a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner, in the parameter range

$K \approx S^2 > 1 .$

Here $K$ is the perturbation parameter, while

$S = \frac{\Delta\omega_r}{\Delta_d}$

is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency $\Delta\omega_r$ (often computed in the pendulum approximation and proportional to the square root of the perturbation) to the frequency difference $\Delta_d$ between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border.
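The criterion is commonly illustrated with the Chirikov standard map, for which resonance overlap predicts the onset of global chaos at K of order unity. A minimal sketch follows; the K value, initial condition, and iteration count are arbitrary choices for illustration:

#include <math.h>
#include <stdio.h>

/* Iterate the Chirikov standard map
     p'     = p + K sin(theta)
     theta' = theta + p'   (mod 2 pi)
   For K well below 1 the motion stays on invariant curves; for K
   above roughly 1, overlapping resonances produce global chaos. */
int main(void)
{
    const double two_pi = 6.283185307179586;
    const double K = 1.2;          /* above the chaos border K ~ 1 */
    double theta = 1.0, p = 0.5;   /* arbitrary initial condition */

    for (int n = 0; n < 10; n++) {
        p += K * sin(theta);
        theta = fmod(theta + p, two_pi);
        if (theta < 0.0)
            theta += two_pi;       /* keep theta in [0, 2 pi) */
        printf("n=%d  theta=%.6f  p=%.6f\n", n, theta, p);
    }
    return 0;
}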
See also
Chirikov criterion at Scholarpedia
Chirikov standard map and the standard map
Boris Chirikov, and his biography at Scholarpedia
References
B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969), (Engl. Trans., CERN Trans. 71-40 (1971))
B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979)
Springer link
External links
website dedicated to Boris Chirikov
Special Volume dedicated to 70th of Boris Chirikov: Physica D 131:1-4 vii (1999) and arXiv
Chaos theory
Chaotic maps | Chirikov criterion | Mathematics | 375 |
2,903,629 | https://en.wikipedia.org/wiki/18%20Bo%C3%B6tis | 18 Boötis is a single star in the northern constellation of Boötes, located about 85 light years away from the Sun. It is visible to the naked eye as a faint, yellow-white hued star with an apparent visual magnitude of 5.41. This object is a suspected member of the Ursa Major Moving Group, based on velocity criteria. It has a magnitude 10.84 optical companion at an angular separation of along a position angle of 219°, as of 2010.
This is an F-type main-sequence star with a stellar classification of F3 V. Older surveys gave a class of F5 IV, showing the luminosity class of a subgiant star. It shows strong evidence for short-term chromospheric variability, although it is not optically variable.
18 Boötis is an estimated 1.15 billion years old and is spinning with a projected rotational velocity of 40.5 km/s. It has 1.3 times the mass of the Sun and 1.4 times the Sun's radius. The star is radiating 3.9 times the luminosity of the Sun from its photosphere at an effective temperature of 6,731 K. An infrared excess has been detected that suggests a cold debris disk is orbiting from the host star with a blackbody temperature fit of 65 K.
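For orientation, a blackbody fit of this kind implies a characteristic orbital distance for the disk under the standard equilibrium-temperature assumption. The figure below is a rough back-of-the-envelope estimate, not a value reported in the sources:

$r \approx \left(\frac{278\ \mathrm{K}}{T_{bb}}\right)^{2} \sqrt{\frac{L_*}{L_\odot}}\ \mathrm{AU} = \left(\frac{278}{65}\right)^{2} \sqrt{3.9}\ \mathrm{AU} \approx 36\ \mathrm{AU},$

where $278\ \mathrm{K}$ is the equilibrium temperature of a blackbody at 1 AU from a star of solar luminosity.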
References
F-type main-sequence stars
Circumstellar disks
Ursa Major moving group
Boötes
Durchmusterung objects
Bootis, 18
125451
069989
5365 | 18 Boötis | Astronomy | 314 |
33,962,121 | https://en.wikipedia.org/wiki/Geostandards%20and%20Geoanalytical%20Research | Geostandards and Geoanalytical Research is a quarterly peer-reviewed scientific journal covering reference materials, analytical techniques, and data quality relevant to the chemical analysis of geological and environmental samples. The journal was established in 1977 as Geostandards Newsletter and modified its title in 2004. The editors-in-chief are Thomas C. Meisel, Jacinta Enzweiler, Mary F. Horan, Kathryn L. Linge, Christophe R. Quétel and Paul J. Sylvester. It is published by Wiley-Blackwell on behalf of the International Association of Geoanalysts. The journal is a hybrid open-access journal, publishing both subscription and open access articles.
Article types
The journal publishes original research papers that include developments in analytical techniques, studies of geological-environmental reference materials, advances in statistical analysis of geoanalytical data, data compilations, and contributions to the characterisation of reference materials, as well as review articles and topical commentaries. It also publishes an annual bibliographic review article of the geoanalytical literature and a biennial series of critical reviews of analytical developments.
Abstracting and indexing
The journal is abstracted and indexed in:
Academic Search
Aquatic Sciences & Fisheries Abstracts
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
GeoRef
Science Citation Index
According to the Journal Citation Reports, the journal has a 2018 impact factor of 4.256, ranking it 11th out of 84 journals in the category "Geochemistry and Geophysics".
See also
List of chemistry journals
List of scientific journals
References
External links
Official website of the International Association of Geoanalysts
Geochemistry journals
English-language journals
Wiley-Blackwell academic journals
Quarterly journals
Academic journals established in 1977 | Geostandards and Geoanalytical Research | Chemistry | 344 |
75,022,723 | https://en.wikipedia.org/wiki/D%20Puppis | The Bayer designations D Puppis and d Puppis are distinct.
For D Puppis:
D Puppis (HR 2691, HD 54475), a bluish star.
For d Puppis:
d1 Puppis (HR 2961, HD 61831), a blue dwarf star
d2 Puppis (HR 2963, HD 61878), a binary star
d3 Puppis (HR 2954, HD 61899), a bluish star
d4 Puppis (V468 Puppis), a variable blue giant star
Puppis, d
Puppis | D Puppis | Astronomy | 125 |
2,445,058 | https://en.wikipedia.org/wiki/Visqueen | Visqueen is a brand of polyethylene plastic sheeting (typically low-density polyethylene) produced by British Polythene Industries Limited. It is the registered trade mark of British Polythene Limited in numerous countries throughout the world. It is commonly between 4 and 10 mils (0.004 to 0.01 in./0.1 to 0.25 mm) thick and is available in clear, opaque, blue, and black.
Visqueen is used for many purposes. It is commonly used as a temporary tarpaulin, as a drop cloth when painting, to cover concrete as it sets, to line decorative ponds, and to cover the ground before applying stone or wood chips to prevent weed growth. Large (100 × 20 ft) sheets of Visqueen are used during floods to protect levees from wave wash erosion. It is often suggested for use in greenhouses. Visqueen is used as a condensation barrier inside walls when installing HVAC systems. It is also used as a ground cover in the crawl space of home foundations as a vapor barrier. The use of Visqueen underneath a basement is to prevent water infiltration from water present in the ground that would pass through the concrete or dirt floor and bring in unwanted dampness.
History
Visqueen was first produced circa 1950 by the Visking Corporation, a company founded in the 1920s by Erwin O. Freund for the purpose of making casings for meat products. Visking investigated the post-World War II emerging technology of polyethylene, and developed manufacturing techniques to make pure virgin polyethylene film. Originally spelled VisQueen, the film was an excellent moisture barrier and was marketed to many industrial, architectural, and consumer applications, such as moisture barriers, plant seedbed protection films, building fumigation barriers, drop cloths, case liners, and tarpaulins.
At a time when Visking was the largest producer of polyethylene film in the U.S., Union Carbide acquired Visking in 1956. An antitrust ruling forced the sale of the polyethylene business of Visking to Ethyl Corporation in 1963. Ethyl, best known for its tetraethyl lead gasoline additive, renamed the division VisQueen. In 1989, Ethyl Corporation, desiring to concentrate on chemical manufacture, spun off VisQueen as a new company, Tredegar Film Products Corporation.
In popular culture
The song "Macho City" by the Steve Miller Band (1981) contains the lyric: "Politicians and lawyers/all know what it means/They'll be keeping it all legal/with political Visqueen".
Notes
External links
visqueen.com – Visqueen Website
Plastic brands
British brands
Building materials | Visqueen | Physics,Engineering | 570 |
21,189,709 | https://en.wikipedia.org/wiki/Operations%20and%20maintenance%20centre | In mobile networks, an operations and maintenance centre (OMC) is the central location from which the network is operated and maintained.
There are various types of OMC, depending on their functionality:
OMC-B (for maintaining Node B)
OMC-R (radio; for maintaining the RNC)
UMTS OMC-U
GPRS OMC-G
OMC-DO
OMC-IP
Telecommunications infrastructure | Operations and maintenance centre | Technology | 78 |
1,113,067 | https://en.wikipedia.org/wiki/GNU%20Scientific%20Library | The GNU Scientific Library (or GSL) is a software library for numerical computations in applied mathematics and science. The GSL is written in C; wrappers are available for other programming languages. The GSL is part of the GNU Project and is distributed under the GNU General Public License.
Project history
The GSL project was initiated in 1996 by physicists Mark Galassi and James Theiler of Los Alamos National Laboratory. They aimed at writing a modern replacement for widely used but somewhat outdated Fortran libraries such as Netlib. They carried out the overall design and wrote early modules; with that ready they recruited other scientists to contribute.
The "overall development of the library and the design and implementation of the major modules" was carried out by Brian Gough and Gerard Jungman. Other major contributors were Jim Davies, Reid Priedhorsky, M. Booth, and F. Rossi.
Version 1.0 was released in 2001. In the following years, the library expanded only slowly; as the documentation stated, the maintainers were more interested in stability than in additional functionality. Major version 1 ended with release 1.16 of July 2013; this was the only public activity in the three years 2012–2014.
Vigorous development resumed with publication of version 2.0 in October 2015, which included user contributed patches. The latest version 2.8 was released in May 2024.
Example
The following example program calculates the value of the Bessel function of the first kind and order zero, J0(x), at x = 5:

#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>

int main(void)
{
    double x = 5.0;
    double y = gsl_sf_bessel_J0(x); /* Bessel function of the first kind, order 0 */

    printf("J0(%g) = %.18e\n", x, y);
    return 0;
}
The example program has to be linked to the GSL library upon compilation:
$ gcc $(gsl-config --cflags) example.c $(gsl-config --libs)
The output is shown below and should be correct to double-precision accuracy:
J0(5) = -1.775967713143382920e-01
Features
The software library provides facilities for complex numbers, roots of polynomials, special functions, vectors and matrices, permutations, sorting, BLAS support and linear algebra, eigensystems, fast Fourier transforms, quadrature, random number generation, quasi-random sequences, random distributions, statistics, histograms, Monte Carlo integration, simulated annealing, ordinary differential equations, interpolation, numerical differentiation, Chebyshev approximation, series acceleration, root finding, minimization, least-squares fitting, and physical constants, among other areas of numerical computing.
Programming-language bindings
Since the GSL is written in C, it is straightforward to provide wrappers for other programming languages. Such wrappers currently exist for
AMPL
C++
Fortran
Haskell
Java
Julia
Common Lisp
OCaml
Octave
Perl Data Language
Python
R
Ruby
Rust
C++ support
The GSL can be used in C++ classes, but not using pointers to member functions, because the type of pointer to member function is different from pointer to function. Instead, pointers to static functions have to be used. Another common workaround is using a functor.
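The underlying reason is GSL's callback convention: routines take a plain C function pointer together with a void* parameter block, packaged in the gsl_function struct, and a non-static member function cannot be passed in that slot. A minimal sketch of the convention (the quadratic function and its parameter struct are illustrative choices):

#include <stdio.h>
#include <gsl/gsl_math.h>

/* GSL passes user parameters through a void pointer rather than a
   closure, which is why C++ member functions need a static shim. */
struct quadratic_params { double a, b, c; };

static double quadratic(double x, void *params)
{
    struct quadratic_params *q = (struct quadratic_params *)params;
    return (q->a * x + q->b) * x + q->c;
}

int main(void)
{
    struct quadratic_params params = { 1.0, -2.0, 1.0 };  /* x^2 - 2x + 1 */
    gsl_function F;
    F.function = &quadratic;
    F.params = &params;
    printf("f(3) = %g\n", GSL_FN_EVAL(&F, 3.0));  /* evaluates quadratic(3.0, &params) */
    return 0;
}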
C++ wrappers for GSL are available, though not all of them are regularly maintained. They offer access to matrix and vector classes without having to use GSL's interface to the malloc and free functions. Some also offer support for creating workspaces that behave like smart pointer classes. Finally, there is (limited, as of April 2020) support for allowing the user to create classes to represent a parameterised function as a functor.
While not strictly wrappers, there are some C++ classes that allow C++ users to use the GNU Scientific Library with wrapper features.
See also
List of numerical-analysis software
List of numerical libraries
Netlib
Numerical Recipes
Notes
References
External links
GSL Design Document
The gsl package for R (programming language), an R wrapper for the special functions and quasi random number generators.
FLOSS FOR SCIENCE interview with Mark Galassi on the history of GSL.
C (programming language) libraries
Free computer libraries
Free software programmed in C
Scientific Library
Mathematical libraries
Numerical libraries
Numerical software
Articles with example C code
Software using the GNU General Public License | GNU Scientific Library | Mathematics | 819 |
47,954,822 | https://en.wikipedia.org/wiki/SPEEDAC | SPEEDAC, the SPErry Electronic Digital Automatic Computer, was an early digital computer built by Sperry Corporation in 1953.
It used 800 vacuum tubes and had magnetic drum storage of 4096 18-bit words.
References
Vacuum tube computers
1953 in computing
1950s computers | SPEEDAC | Technology | 54 |
58,726,923 | https://en.wikipedia.org/wiki/Alison%20Butler | Alison Butler is a Distinguished Professor in the Department of Chemistry and Biochemistry at the University of California, Santa Barbara. She works on bioinorganic chemistry and metallobiochemistry. She is a Fellow of the American Association for the Advancement of Science (1997), the American Chemical Society (2012), the American Academy of Arts and Sciences (2019), and the Royal Society of Chemistry (2019). She was elected a member of the National Academy of Sciences in 2022.
Education
Butler studied at Reed College, graduating in 1977. She started in immunology, but moved into chemistry to work with transition metals. She worked with Professor Tom Dunne on "An intramolecular electron transfer study: the reduction of pyrazinepentaaminecobalt(III) by chromium(II)". She earned her PhD at the University of California, San Diego in 1982 under Robert G. Linck and Teddy G. Traylor.
Career
Butler worked as a postdoctoral fellow at University of California, Los Angeles with Joan S. Valentine and at California Institute of Technology with Harry B. Gray. She was appointed to the faculty at University of California, Santa Barbara in 1986. Here she was awarded an American Cancer Society Junior Faculty Research Award. She was awarded the 34th University of California, Santa Barbara Harold J Plous Award.
Her research aims to discover new siderophores, small molecules that bind iron in microorganisms. She uses genomics and bioinformatics to predict new siderophore structures. She explores how siderophores adhere to mica and looks at how they can promote surface colonisation. She identified that siderophores become sticky when wet, which may help to develop underwater adhesives. Her current research considers the uptake of microbial iron, vanadium haloperoxidases in microbial quorum sensing and cryptic halogenation, bio-inspired wet adhesion using catechol compounds, and the oxidative disassembly of lignin. Her research into the bioinorganic chemistry of iron is funded by the National Institutes of Health and the National Science Foundation. She studies how transition metal ions are used by marine organisms.
In 2012, she became the President of the Society for Biological Inorganic Chemistry, and served until 2014. She was made a Fellow of the American Chemical Society in July 2012. She delivered the 2016 Douglas Eveleigh Endowed Lecture at the Waksman Institute of Microbiology. In 2018, she was awarded the American Chemical Society Alfred Bader Award for her work on siderophores.
In 2019, she was elected to the American Academy of Arts and Sciences, received the American Chemical Society's Arthur C. Cope Scholar award for excellence in organic chemistry, and received the Royal Society of Chemistry's Inorganic Mechanisms Award. Butler also received the 2019-2020 Faculty Research Lecturer Award, the highest honor that University of California, Santa Barbara faculty can bestow on their members.
References
Year of birth missing (living people)
Living people
American women chemists
Reed College alumni
University of California, San Diego alumni
University of California, Santa Barbara faculty
Inorganic chemists
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
Fellows of the American Chemical Society
21st-century American women
Members of the United States National Academy of Sciences | Alison Butler | Chemistry | 674 |
50,654,602 | https://en.wikipedia.org/wiki/Paula%20J.%20Olsiewski | Paula J. Olsiewski is an American biochemist who is a Contributing Scholar at the Johns Hopkins Center for Health Security. She was a Program Director at the Alfred P. Sloan Foundation, where she created and directed the Foundation's programs in the Microbiology of the Built Environment, the Chemistry of Indoor Environments and Civic Initiatives. She directed the Biosecurity program until its conclusion in 2011 and the Synthetic Biology program until its conclusion in 2014.
Education
Olsiewski earned a bachelor's degree in chemistry, cum laude, from Yale College, and a Doctor of Philosophy in biological chemistry from the Massachusetts Institute of Technology (1979) with a thesis on D-amino acid dehydrogenase evolution, supervised by Christopher T. Walsh. From 1980 to 1982 she was a Postdoctoral Fellow in the lab of William H. Beers at New York University.
Biotech and biomedical commercial development
Olsiewski directed commercial development for in vitro diagnostic products at Enzo Biochem (NYSE: ENZ), a biotechnology company focused on the manipulation and modification of nucleic acids to produce therapeutic and diagnostic products. She directed the New York City Biotechnology Initiative, a state-funded program to improve the region's ability to grow biotechnology companies by fostering relationships between industry and academia. She also established and directed the Technology Development Office at the Hospital for Special Surgery.
Board and advisory committee roles
Olsiewski is chair of the Board of Scientific Counselors Homeland Security Research Subcommittee at the U.S. Environmental Protection Agency, and on the board of directors at the Critical Path Institute. In 2001 she served on the Board of Advisors for the WMD Center's Bio-Response Report Card. From 2003 to 2009 she was a member of the MIT Corporation. She was the first alumna to serve as President of the MIT Alumni Association (2003-2004), and served on the advisory board of the MIT Initiative on Faculty Race and Diversity (2008-2009). She was a member of the Committee on Advances in Technology and the Prevention of Their Application to Next Generation Biowarfare Threats, which produced the National Research Council report "Globalization, Biosecurity, and the Future of Life Sciences" (2006). From 2005 to 2012 she served on the advisory board for the National Consortium for the Study of Terrorism and Responses to Terrorism (START).
Selected writings & publications
Haynie, Sharon L.; Hinkle, Amber S.; Jones, Nancy L.; Martin, Cheryl A.; Olsiewski, Paula J.; Roberts, Mary F. (2011). “Reflections on the Journey: Six Short Stories.” Chemistry Central Journal, 5 (69): 1–12. doi: 10.1186/1752-153X-5-69
Her most cited papers, according to Google Scholar:
Awards and honors
In 1995, Olsiewski won the MIT Henry B. Kane '24 Award, which is given in recognition of exceptional service and accomplishments in the area of fundraising. In 2000, she received the MIT Bronze Beaver Alumni Award, which is given in recognition of distinguished service; it is the highest honor the Alumni Association bestows upon any of its members. Also in 2000, she received the Yale Class Distinguished Service Award, which is selected by the class leadership and bestowed to recognize and thank classmates who have dedicated time, energy and enthusiasm to the class. In 2018, Olsiewski was elected as an AAAS Fellow in the Chemical Sciences division. In 2022, the International Society of Indoor Air Quality and Climate inducted Olsiewski as a new Academy Fellow and awarded her their Special Award "in recognition of her advocacy and support of basic research for the microbiology and chemistry of the indoor environment."
References
Year of birth missing (living people)
Living people
American women biochemists
Yale College alumni
Massachusetts Institute of Technology School of Science alumni
Synthetic biologists
American women scientists
American scientists
21st-century American women | Paula J. Olsiewski | Biology | 790 |
486,551 | https://en.wikipedia.org/wiki/Language%20engineering | Language engineering involves the creation of natural language processing systems whose cost and outputs are measurable and predictable. It is a distinct field, contrasted with natural language processing and computational linguistics. A recent trend in language engineering is the use of Semantic Web technologies for the creation, archiving, processing, and retrieval of machine-processable language data.
References
Natural language processing | Language engineering | Technology | 72 |
35,850,573 | https://en.wikipedia.org/wiki/Diphenyl%20sulfone | Diphenyl sulfone is an organosulfur compound with the formula (C6H5)2SO2. It is a white solid that is soluble in organic solvents. It is used as a high temperature solvent. Such high temperature solvents are useful for processing highly rigid polymers, e.g., PEEK, which only dissolve in very hot solvents.
It is produced by the sulfonation of benzene with sulfuric acid and oleum. For typical processes, benzenesulfonic acid is an intermediate. It is also produced from benzenesulfonyl chloride and benzene.
References
Benzosulfones
Phenyl compounds | Diphenyl sulfone | Chemistry | 135 |
26,855,026 | https://en.wikipedia.org/wiki/Laterite | Laterite is a soil type rich in iron and aluminium and is commonly considered to have formed in hot and wet tropical areas. Nearly all laterites are of rusty-red coloration, because of high iron oxide content. They develop by intensive and prolonged weathering of the underlying parent rock, usually when there are conditions of high temperatures and heavy rainfall with alternate wet and dry periods. The process of formation is called laterization. Tropical weathering is a prolonged process of chemical weathering which produces a wide variety in the thickness, grade, chemistry and ore mineralogy of the resulting soils. The majority of the land area containing laterites is between the tropics of Cancer and Capricorn.
Laterite has commonly been referred to as a soil type as well as being a rock type. This, and further variation in the modes of conceptualizing about laterite (e.g. also as a complete weathering profile or theory about weathering), has led to calls for the term to be abandoned altogether. At least a few researchers, including T. R. Paton and M. A. J. Williams, specializing in regolith development have considered that hopeless confusion has evolved around the name. Material that looks highly similar to the Indian laterite occurs abundantly worldwide.
Historically, laterite was cut into brick-like shapes and used in monument-building. After 1000 CE, construction at Angkor Wat and other southeast Asian sites changed to rectangular temple enclosures made of laterite, brick, and stone. Since the mid-1970s, some trial sections of bituminous-surfaced, low-volume roads have used laterite in place of stone as a base course. Thick laterite layers are porous and slightly permeable, so the layers can function as aquifers in rural areas. Locally available laterites have been used in an acid solution, followed by precipitation to remove phosphorus and heavy metals at sewage-treatment facilities.
Laterites are a source of aluminum ore; the ore exists largely in clay minerals and the hydroxides, gibbsite, boehmite, and diaspore, which resembles the composition of bauxite. In Northern Ireland they once provided a major source of iron and aluminum ores. Laterite ores also were the early major source of nickel.
Definition and physical description
Francis Buchanan-Hamilton first described and named a laterite formation in southern India in 1807. He named it laterite from the Latin word later, which means a brick; this highly compacted and cemented soil can easily be cut into brick-shaped blocks for building. The word laterite has been used for variably cemented, sesquioxide-rich soil horizons. A sesquioxide is an oxide with three atoms of oxygen and two metal atoms. It has also been used for any reddish soil at or near the Earth's surface.
Laterite covers are thick in the stable areas of the Western Ethiopian Shield, on cratons of the South American Plate, and on the Australian Shield. In Madhya Pradesh, India, the laterite which caps the plateau is thick. Laterites can be either soft and easily broken into smaller pieces, or firm and physically resistant. Basement rocks are buried under the thick weathered layer and rarely exposed. Lateritic soils form the uppermost part of the laterite cover.
In some places laterites contain pisolites and ferricrete, and they may be found in elevated positions as result of relief inversion.
Cliff Ollier has criticized the usefulness of the concept given that it is used to mean different things to different authors. Reportedly some have used it for ferricrete, others for tropical red earth soil, and yet others for soil profiles made, from top to bottom, of a crust, a mottled zone and a pallid zone. He cautions strongly against the concept of "lateritic deep weathering" since "it begs so many questions".
Formation
Tropical weathering (laterization) is a prolonged process of chemical weathering which produces a wide variety in the thickness, grade, chemistry and ore mineralogy of the resulting soils. The initial products of weathering are essentially kaolinized rocks called saprolites. A period of active laterization extended from about the mid-Tertiary to the mid-Quaternary periods (35 to 1.5 million years ago). Statistical analyses show that the transition in the mean and variance levels of ¹⁸O during the middle of the Pleistocene was abrupt. It seems this abrupt change was global and mainly represents an increase in ice mass; at about the same time an abrupt decrease in sea surface temperatures occurred; these two changes indicate a sudden global cooling. The rate of laterization would have decreased with the abrupt cooling of the Earth. Weathering in tropical climates continues to this day, at a reduced rate.
Laterites are formed from the leaching of parent sedimentary rocks (sandstones, clays, limestones); metamorphic rocks (schists, gneisses, migmatites); igneous rocks (granites, basalts, gabbros, peridotites); and mineralized proto-ores; which leaves the more insoluble ions, predominantly iron and aluminum. The mechanism of leaching involves acid dissolving the host mineral lattice, followed by hydrolysis and precipitation of insoluble oxides and sulfates of iron, aluminum and silica under the high temperature conditions of a humid sub-tropical monsoon climate.
An essential feature for the formation of laterite is the repetition of wet and dry seasons. Rocks are leached by percolating rain water during the wet season; the resulting solution containing the leached ions is brought to the surface by capillary action during the dry season. These ions form soluble salt compounds which dry on the surface; these salts are washed away during the next wet season. Laterite formation is favored in low topographical reliefs of gentle crests and plateaus which prevent erosion of the surface cover. The reaction zone where rocks are in contact with water—from the lowest to highest water table levels—is progressively depleted of the easily leached ions of sodium, potassium, calcium and magnesium. A solution of these ions can have the correct pH to preferentially dissolve silicon oxide rather than the aluminum oxides and iron oxides. Silcrete has been suggested to form in the relatively dry "precipitating zones" of laterites; by contrast, ferricretes have been suggested to form in the wetter parts of laterites that are subject to leaching.
The mineralogical and chemical compositions of laterites are dependent on their parent rocks. Laterites consist mainly of quartz, zircon, and oxides of titanium, iron, tin, aluminum and manganese, which remain during the course of weathering. Quartz is the most abundant relic mineral from the parent rock.
Laterites vary significantly according to their location, climate and depth. The main host minerals for nickel and cobalt can be either iron oxides, clay minerals or manganese oxides. Iron oxides are derived from mafic igneous rocks and other iron-rich rocks; bauxites are derived from granitic igneous rock and other iron-poor rocks. Nickel laterites occur in zones of the earth which experienced prolonged tropical weathering of ultramafic rocks containing the ferro-magnesian minerals olivine, pyroxene, and amphibole.
Locations
Yves Tardy, from the French Institut National Polytechnique de Toulouse and the Centre National de la Recherche Scientifique, calculated that laterites cover about one-third of the Earth's continental land area. Lateritic soils are the subsoils of the equatorial forests, of the savannas of the humid tropical regions, and of the Sahelian steppes. They cover most of the land area between the tropics of Cancer and Capricorn; areas not covered within these latitudes include the extreme western portion of South America, the southwestern portion of Africa, the desert regions of north-central Africa, the Arabian peninsula and the interior of Australia.
Some of the oldest and most highly deformed ultramafic rocks which underwent laterization are found as petrified fossil soils in the complex Precambrian shields in Brazil and Australia. Smaller highly deformed Alpine-type intrusives have formed laterite profiles in Guatemala, Colombia, Central Europe, India and Burma. Large thrust sheets of Mesozoic island arcs and continental collision zones underwent laterization in New Caledonia, Cuba, Indonesian and the Philippines. Laterites reflect past weathering conditions; laterites which are found in present-day non-tropical areas are products of former geological epochs, when that area was near the equator. Present-day laterite occurring outside the humid tropics are considered to be indicators of climatic change, continental drift or a combination of both. In India, laterite soils occupy an area of 240,000 square kilometres.
Uses
Agriculture
Laterite soils have a high clay content, which means they have higher cation exchange capacity, lower permeability, higher plasticity and higher water-holding capacity than sandy soils. Because the particles are so small, water is trapped between them, and after rain the water moves into the soil slowly. Due to intensive leaching, laterite soils lack fertility in comparison to other soils; however, they respond readily to manuring and irrigation. Palms are less likely to suffer from drought because the rainwater is held in the soil. However, if the structure of lateritic soils becomes degraded, a hard crust can form on the surface, which hinders water infiltration and the emergence of seedlings, and leads to increased runoff. It is possible to rehabilitate such soils, using a system called the 'bio-reclamation of degraded lands'. This involves using indigenous water-harvesting methods (such as planting pits and trenches), applying animal and plant residues, and planting high-value fruit trees and indigenous vegetable crops that are tolerant of drought conditions. These soils are most suitable for plantation crops; they are good for oil palm, tea, coffee and cashew cultivation. The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) has employed this system to rehabilitate degraded laterite soils in Niger and increase smallholder farmers' incomes. In some places, these soils support grazing grounds and scrub forests.
Building blocks
When moist, laterites can easily be cut with a spade into regular-sized blocks. Laterite is mined while it is below the water table, so it is wet and soft. Upon exposure to air it gradually hardens, as the moisture between the flat clay particles evaporates and the larger iron salts lock into a rigid lattice structure that becomes resistant to atmospheric conditions; the blocks are said to harden like iron. The art of quarrying laterite material into masonry is suspected to have been introduced from the Indian subcontinent.
After 1000 CE Angkorian construction changed from circular or irregular earthen walls to rectangular temple enclosures of laterite, brick and stone structures. Geographic surveys show areas which have laterite stone alignments which may be foundations of temple sites that have not survived. The Khmer people constructed the Angkor monuments—which are widely distributed in Cambodia and Thailand—between the 9th and 13th centuries. The stone materials used were sandstone and laterite; brick had been used in monuments constructed in the 9th and 10th centuries. Two types of laterite can be identified; both types consist of the minerals kaolinite, quartz, hematite and goethite. Differences in the amounts of minor elements arsenic, antimony, vanadium and strontium were measured between the two laterites.
Angkor Wat—located in present-day Cambodia—is the largest religious structure built by Suryavarman II, who ruled the Khmer Empire from 1112 to 1152. It is a World Heritage site. The sandstone used for the building of Angkor Wat is Mesozoic sandstone quarried in the Phnom Kulen Mountains, about away from the temple. The foundations and internal parts of the temple contain laterite blocks behind the sandstone surface. The masonry was laid without joint mortar.
It is used as a local building material in places such as Burkina Faso, where it is valued for being strong and for reducing heating and cooling costs.
Road building
The French surfaced roads in the Cambodia, Thailand and Vietnam area with crushed laterite, stone or gravel. Kenya, during the mid-1970s, and Malawi, during the mid-1980s, constructed trial sections of bituminous-surfaced low-volume roads using laterite in place of stone as a base course. The laterite did not conform with any accepted specifications but performed equally well when compared with adjoining sections of road using stone or other stabilized material as a base. In 1984 US$40,000 per was saved in Malawi by using laterite in this way. It is also widely used in Brazil for road building.
Water supply
Bedrock in tropical zones is often granite, gneiss, schist or sandstone; the thick laterite layer is porous and slightly permeable so the layer can function as an aquifer in rural areas. One example is the Southwestern Laterite (Cabook) Aquifer in Sri Lanka. This aquifer is on the southwest border of Sri Lanka, with the narrow Shallow Aquifers on Coastal Sands between it and the ocean. It has considerable water-holding capacity, depending on the depth of the formation. The aquifer in this laterite recharges rapidly with the rains of April–May which follow the dry season of February–March, and continues to fill with the monsoon rains. The water table recedes slowly and is recharged several times during the rest of the year. In some high-density suburban areas the water table could recede to below ground level during a prolonged dry period of more than 65 days. The Cabook Aquifer laterites support relatively shallow aquifers that are accessible to dug wells.
Waste water treatment
In Northern Ireland, phosphorus enrichment of lakes due to agriculture is a significant problem. Locally available laterite—a low-grade bauxite rich in iron and aluminum—is used in acid solution, followed by precipitation to remove phosphorus and heavy metals at several sewage treatment facilities. Calcium-, iron- and aluminum-rich solid media are recommended for phosphorus removal. A study, using both laboratory tests and pilot-scale constructed wetlands, reports the effectiveness of granular laterite in removing phosphorus and heavy metals from landfill leachate. Initial laboratory studies show that laterite is capable of 99% removal of phosphorus from solution. A pilot-scale experimental facility containing laterite achieved 96% removal of phosphorus. This removal is greater than reported in other systems. Initial removals of aluminum and iron by pilot-scale facilities have been up to 85% and 98% respectively. Percolating columns of laterite removed enough cadmium, chromium and lead to undetectable concentrations. There is a possible application of this low-cost, low-technology, visually unobtrusive, efficient system for rural areas with dispersed point sources of pollution.
Ores
Ores are concentrated in metalliferous laterites; aluminum is found in bauxites, iron and manganese are found in iron-rich hard crusts, nickel and copper are found in disintegrated rocks, and gold is found in mottled clays.
Bauxite
Bauxite ore is the main source of aluminum. It is a variety of laterite (residual sedimentary rock), so it has no precise chemical formula. It is composed mainly of hydrated alumina minerals such as gibbsite [Al(OH)3, or Al2O3·3H2O] in newer tropical deposits; in older subtropical, temperate deposits the major minerals are boehmite [γ-AlO(OH), or Al2O3·H2O] and some diaspore [α-AlO(OH), or Al2O3·H2O]. The average chemical composition of bauxite, by weight, is 45 to 60% Al2O3 and 20 to 30% Fe2O3. The remaining weight consists of silicas (quartz, chalcedony and kaolinite), carbonates (calcite, magnesite and dolomite), titanium dioxide and water. Bauxites of economic interest must be low in kaolinite. Formation of lateritic bauxites occurs worldwide in the 145- to 2-million-year-old Cretaceous and Tertiary coastal plains. The bauxites form elongate belts, sometimes hundreds of kilometers long, parallel to Lower Tertiary shorelines in India and South America; their distribution is not related to a particular mineralogical composition of the parent rock. Many high-level bauxites are formed in coastal plains which were subsequently uplifted to their present altitude.
Iron
The basaltic laterites of Northern Ireland were formed by extensive chemical weathering of basalts during a period of volcanic activity. They reach a maximum thickness of and once provided a major source of iron and aluminum ore. Percolating waters caused degradation of the parent basalt and preferential precipitation by acidic water through the lattice left the iron and aluminum ores. Primary olivine, plagioclase feldspar and augite were successively broken down and replaced by a mineral assemblage consisting of hematite, gibbsite, goethite, anatase, halloysite and kaolinite.
Nickel
Laterite ores were the major source of early nickel. Rich laterite deposits in New Caledonia were mined starting the end of the 19th century to produce white metal. The discovery of sulfide deposits of Sudbury, Ontario, Canada, during the early part of the 20th century shifted the focus to sulfides for nickel extraction. About 70% of the Earth's land-based nickel resources are contained in laterites; they currently account for about 40% of the world nickel production. In 1950 laterite-source nickel was less than 10% of total production, in 2003 it accounted for 42%, and by 2012 the share of laterite-source nickel was expected to be 51%. The four main areas in the world with the largest nickel laterite resources are New Caledonia, with 21%; Australia, with 20%; the Philippines, with 17%; and Indonesia, with 12%.
See also
Ferricrete – stony particles conglomerated into rock by oxidized iron compounds from ground water
References
Sedimentology
Weathering
Ore deposits
Aluminium minerals
Pedology
Building materials
Soil-based building materials
Regolith | Laterite | Physics,Engineering | 3,815 |
64,440,137 | https://en.wikipedia.org/wiki/S10%20ribosomal%20protein%20leader | The S10 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It acts as an autoregulatory mechanism to control the concentration of the ribosomal protein S10. Known examples have been predicted in Clostridia and other lineages of Bacillota using bioinformatic approaches. The structure is located in the 5′ untranslated regions of mRNAs encoding the ribosomal proteins S10 (rpsJ), L3 (rplC) and L4 (rplD).
The identity of the ligand remains uncertain because of a lack of experimental investigation.
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | S10 ribosomal protein leader | Chemistry | 143 |
32,367,140 | https://en.wikipedia.org/wiki/Stone-coated%20metal%20roofing | A stone-coated metal roof is a roof made from steel or another metal that is coated with stone chips attached to the metal by an acrylic film. The goal is a more durable roof that still retains the aesthetic advantages of a more traditional roofing material.
History
Stone coated metal roofing was refined during and after World War II in the United Kingdom, when the government requested materials that would protect corrugated steel roofs from the harsh climate. A coating of bitumen and subsequent covering by sand, stone or other materials proved effective at protecting the metal roofs and serving as camouflage against potential attack.
In 1954, L.J. Fisher, an industrialist from New Zealand, secured the rights to produce stone-coated metal roofing outside Great Britain. The company he founded, AHI Roofing, operates the largest metal roofing factory in the world, and has continued to make changes to the metal roofing product.
Advantages of stone roofs
Appearance: stone-coated roofs retain the look of traditional roofing materials, and architects often recommend them for their appearance and range of designs.
Strength: stone-coated roofs are strong and durable, and can withstand much more than traditional roofs.
Water resistance: they are water resistant and can therefore withstand harsh weather.
References
Roofs | Stone-coated metal roofing | Technology,Engineering | 248 |