Dataset columns (each record below lists these fields in order): id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
37,592,148
https://en.wikipedia.org/wiki/Yasutaka%20Ihara
Yasutaka Ihara (伊原 康隆, Ihara Yasutaka; born 1938, Tokyo Prefecture) is a Japanese mathematician and professor emeritus at the Research Institute for Mathematical Sciences. His work in number theory includes Ihara's lemma and the Ihara zeta function. Career Ihara received his PhD at the University of Tokyo in 1967 with the thesis Hecke polynomials as congruence zeta functions in elliptic modular case. From 1965 to 1966, Ihara worked at the Institute for Advanced Study. He was a professor at the University of Tokyo and then at the Research Institute for Mathematical Sciences (RIMS) of Kyoto University. In 2002 he retired from RIMS as professor emeritus and then became a professor at Chūō University. In 1970, he was an invited speaker (with the lecture Non abelian class fields over function fields in special cases) at the International Congress of Mathematicians (ICM) in Nice. In 1990, Ihara gave a plenary lecture, Braids, Galois groups and some arithmetic functions, at the ICM in Kyōto. His doctoral students include Kazuya Katō. Research Ihara has worked on geometric and number-theoretic applications of Galois theory. In the 1960s, he introduced the eponymous Ihara zeta function. In graph theory, the Ihara zeta function can be interpreted as a zeta function of a finite graph; this interpretation was conjectured by Jean-Pierre Serre and proved by Toshikazu Sunada in 1985. Sunada also proved that a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis. Selected works On Congruence Monodromy Problems, Mathematical Society of Japan Memoirs, World Scientific 2009 (based on lectures in 1968/1969) with Michael Fried (ed.): Arithmetic fundamental groups and noncommutative Algebra, American Mathematical Society, Proc. Symposium Pure Math. vol.70, 2002 as editor: Galois representations and arithmetic algebraic geometry, North Holland 1987 with Kenneth Ribet, Jean-Pierre Serre (eds.): Galois Groups over Q, Springer 1989 (Proceedings of a Workshop 1987) References External links Yasutaka Ihara's homepage at RIMS The Ihara Zeta Function and the Riemann Zeta Function by Mollie Stein, Amelia Wallace Living people 20th-century Japanese mathematicians 21st-century Japanese mathematicians 1938 births Number theorists University of Tokyo alumni Academic staff of the University of Tokyo
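For reference, the graph-theoretic form of the Ihara zeta function mentioned above is usually stated through the Ihara (Ihara–Bass) determinant formula; the following is the standard textbook statement rather than a result quoted from this article:

\[
\zeta_G(u)^{-1} \;=\; (1-u^2)^{\,|E|-|V|}\,\det\!\bigl(I - Au + (D - I)u^2\bigr),
\]

where $G$ is a finite connected graph with vertex set $V$, edge set $E$, adjacency matrix $A$ and diagonal degree matrix $D$. For a $(q+1)$-regular graph, the Riemann-hypothesis analogue referred to above amounts to the condition that every non-trivial eigenvalue $\lambda$ of $A$ satisfies $|\lambda| \le 2\sqrt{q}$, which is exactly the Ramanujan property.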
Yasutaka Ihara
Mathematics
489
37,589,545
https://en.wikipedia.org/wiki/Glasgow%20effect
The Glasgow effect is a contested term which refers to the lower life expectancy of residents of Glasgow compared to the rest of the United Kingdom and Europe. The phenomenon is defined as an "[e]xcess mortality in the West of Scotland (Glasgow) after controlling for deprivation." Although lower income levels are generally associated with poor health and a shorter lifespan, epidemiologists have argued that poverty alone does not appear to account for the disparity found in Glasgow. Equally deprived areas of the UK such as Liverpool and Manchester have higher life expectancies, and the wealthiest ten per cent of the Glasgow population have a lower life expectancy than the same group in other cities. One in four men in Glasgow will die before his sixty-fifth birthday. Several hypotheses have been proposed to account for the ill health, including the practice in the 1960s and 1970s of offering young, skilled workers in Glasgow social housing in new towns, leaving behind a demographically "unbalanced population". Other suggested factors have included a high prevalence of premature and low birthweight births, land contaminated by toxins, a high level of derelict land, more deindustrialisation than in comparable cities, poor social housing, religious sectarianism, lack of social mobility, vitamin D deficiency, cold winters, higher levels of poverty than the figures suggest, adverse childhood experiences and childhood stress, high levels of stress in general, and social alienation. Excess mortality and morbidity The city's mortality gap was not apparent until 1950 and seems to have widened since the 1970s. The Economist wrote in 2012: "It is as if a malign vapour rises from the Clyde at night and settles in the lungs of sleeping Glaswegians." The mortality rates are the highest in the UK and among the highest in Europe. As of 2016, life expectancy in Scotland was lower for both females and males than anywhere else in western Europe, and was not improving as quickly as in other western European countries. With a population of 1.2 million in greater Glasgow, life expectancy at birth is 71.6 years for men, nearly seven years below the national average of 78.2 years, and 78 years for women, over four years below the national average of 82.3. According to the World Health Organization in 2008, the male life expectancy at birth in the Calton area of Glasgow between 1998–2002 was 54 years. A local doctor attributed this to alcohol and drug abuse, and to a violent gang culture. According to Bruce Whyte of the Glasgow Centre for Population Health, writing in 2015, the estimate was based on deaths in 1998–2002 in an area comprising 2,500 people, and the figures may have been affected by the presence of hostels for adults with alcohol, drug and mental health problems. The 2008–2012 estimate for Calton and nearby Bridgeton together, by then more ethnically diverse and with fewer hostels, was 67.8 years for males and 76.6 years for females. Research led by David Walsh of the Glasgow Centre for Population Health in 2010 concluded that the deprivation profiles of Glasgow, Liverpool and Manchester are almost identical, but premature deaths in Glasgow are over 30 per cent higher, and all deaths around 15 per cent higher, across almost the entire population. The higher mortality is fueled by stroke, respiratory disease, cardiovascular disease and cancer, along with deaths caused by alcohol, drugs, violence and suicide. 
According to a 2016 study, 43 per cent of adults are classified as either disabled or chronically ill. Suicide rates are higher than they were in 1968, and the all-cause mortality rate in the 15–44 age group is 142.4 deaths per 100,000. Drug-related deaths in Scotland more than doubled between 2006 and 2016. Hypotheses The Glasgow Centre for Population Health (GCPH) was established in 2004 to study the causes of Glasgow's ill health; the centre's partners are NHS Greater Glasgow and Clyde, Glasgow City Council and the University of Glasgow. In a publication introducing the GCPH, the Chief Medical Officer for Scotland, Harry Burns, referred to research suggesting that chronically activated stress responses, especially in children, affect the structure of parts of the frontal lobes of the brain, and that these determine the physical reaction to stress, which could result in chronic ill health. The ability to attain good health, he suggested, depends in part on whether people feel in control of their lives, and whether they see their environments as threatening or supportive. A GCPH report in 2016 concluded that certain historical processes and policy decisions had left the city more vulnerable to deprivation. Factors include the "lagged effects" of overcrowding and the former practice, in the 1960s and 1970s, of offering young, skilled workers social housing in new towns outside Glasgow; this, according to a 1971 government document, threatened to leave behind an "unbalanced population with a very high proportion of the old, the very poor and the almost unemployable". Other hypotheses have included a higher prevalence of premature and low-birthweight births; land contaminated by toxins such as chromium; a high level of derelict land, leading to a "negative physical environment"; more deindustrialisation than in comparable cities; and low-quality housing estates. Social deficits and sources of social dysfunction have been suggested: religious sectarianism; a low "sense of coherence"; low social capital; lack of social mobility; and a culture of alienation and pessimism. Soft water (with lower levels of magnesium and calcium) has been mentioned as a possible factor, as have cold winters; vitamin D deficiency; higher levels of poverty than the figures suggest; and adverse childhood experiences. See also Housing in Glasgow Salutogenesis Notes References Further reading Glasgow Centre for Population Health (2016). The ‘Glasgow Effect’ and the ‘Scottish Effect’: unhelpful terms which have now lost their meaning. Craig, Carol (2010). The Tears that Made the Clyde: Well-being in Glasgow. Argyll: Argyll Publishing. Harrison, Ellie (2016). The Glasgow Effect: A tale of class, capitalism and carbon footprint. Edinburgh: Luath Press. Macdonald, Fleur (16 October 2019). "The 'Glasgow effect' implies cities make us sad. Can the city prove the opposite?". The Guardian. Death in Glasgow Epidemiology Life expectancy Poverty in Scotland Urban decay in Europe
Glasgow effect
Biology,Environmental_science
1,320
42,943,213
https://en.wikipedia.org/wiki/TAE%20Technologies
TAE Technologies, formerly Tri Alpha Energy, is an American company based in Foothill Ranch, California, that is developing aneutronic fusion power. The company's design relies on an advanced beam-driven field-reversed configuration (FRC), which combines features from accelerator physics and other fusion concepts in a unique fashion, and is optimized for hydrogen-boron fuel, also known as proton-boron or p-11B. The company regularly publishes theoretical and experimental results in academic journals, has presented hundreds of papers and posters at scientific conferences, and hosts these articles in a research library on its website. TAE has developed five generations of original fusion platforms, with a sixth currently in development. It aims to manufacture a prototype commercial fusion reactor by 2030. Organization The company was founded in 1998 and is backed by private capital. It operated as a stealth company for many years, refraining from launching its website until 2015. It did not generally discuss progress or any schedule for commercial production. However, it has registered and renewed various patents. As of 2021, TAE Technologies reportedly had more than 250 employees and had raised over US$880 million. Funding Main financing has come from Goldman Sachs and venture capitalists such as Microsoft co-founder Paul Allen's Vulcan Inc., Rockefeller's Venrock, and Richard Kramlich's New Enterprise Associates. The Government of Russia, through the joint-stock company Rusnano, invested in Tri Alpha Energy in October 2012, and Anatoly Chubais, Rusnano CEO, became a board member. Other investors include the Wellcome Trust and the Kuwait Investment Authority. As of July 2017 the company reported that it had raised more than $500 million in backing. As of 2020, it had raised over $600 million, which rose to around $880 million in 2021 and $1.2 billion as of 2022. Leadership and board of directors The company was co-founded by physicist Norman Rostoker, and its technology is a spin-out of his work at the University of California, Irvine. Steven Specker, former CEO of the Electric Power Research Institute (EPRI), was CEO from October 2016 to July 2018. Michl Binderbauer, who earned his PhD in plasma physics under the guidance of Rostoker at UCI, moved from CTO to CEO following Specker's retirement. Specker remains an advisor. Additional board members include Jeff Immelt, former CEO of General Electric; John J. Mack, former CEO of Morgan Stanley; and Ernest Moniz, former United States Secretary of Energy, who joined the company's board of directors in May 2017. Collaborators Since 2014 TAE Technologies has worked with Google to develop a process to analyze the data collected on plasma behavior in fusion reactors. In 2017, using a machine learning tool developed through the partnership and based on the "Optometrist Algorithm", it found significant improvements in plasma containment and stability over the previous C-2U machine. The study's results were published in Scientific Reports. In November 2017 the company was admitted to a United States Department of Energy program, "Innovative and Novel Computational Impact on Theory and Experiment", that gave it access to the Cray XC40 supercomputer. In 2021, TAE Technologies announced a joint research project with Japan's National Institute for Fusion Science (NIFS), a three-year study of the effects of hydrogen-boron fuel reactions in the NIFS Large Helical Device (LHD).
Subsidiaries TAE Life Sciences In March 2018 TAE Technologies announced it had raised $40 million to create TAE Life Sciences, a subsidiary focused on refining boron neutron capture therapy (BNCT) for cancer treatment, with funding led by ARTIS Ventures. TAE Life Sciences also announced that it would partner with Neuboron Medtech, which would be the first to install the company's beam system. TAE Life Sciences shares common board members with TAE Technologies and is led by Bruce Bauer. TAE Power Solutions In September 2021, TAE Technologies announced the formation of a new division, Power Solutions, to commercialize the power management systems developed on the C-2W/Norman reactor for the electric vehicle, charging infrastructure, and energy storage markets, with veteran industrialist David Roberts as its CEO. Design Underlying theory In mainline fusion approaches, the energy needed to overcome the Coulomb barrier and allow reactions is provided by heating the fusion fuel to millions of degrees. In such fuel, the electrons dissociate from their ions to form a gas-like mixture known as a plasma. In any gas-like mixture, the particles will be found in a wide variety of energies, according to the Maxwell–Boltzmann distribution. In these systems, fusion occurs when two of the higher-energy particles in the mix randomly collide. Keeping the fuel together long enough for this to occur is a major challenge. TAE's machines spin the plasma up into a loop of hot, dense plasma called a field-reversed configuration (FRC). Material inside an FRC is self-contained by the fields the plasma creates. As the plasma current moves around the loop, it creates a magnetic field perpendicular to the direction of motion, much like current in a wire would do. This self-created field helps to hold in the plasma current and keeps the loop stable. The challenge with field-reversed configurations is that they slow down over time, wobble, and eventually collapse. The company's innovation was to continuously apply particle beams along the surface of the FRC to keep it rotating. This beam and hoop system was key to increasing the machines' longevity, stability and performance. TAE's design The TAE design forms a field-reversed configuration (FRC), a self-stabilized rotating toroid of particles similar to a smoke ring. In the TAE system, the ring is made as thin as possible, about the same aspect ratio as an opened tin can. Particle accelerators inject fuel ions tangentially to the surface of the cylinder, where they either react or are captured into the ring as additional fuel. Unlike other magnetic confinement fusion devices such as the tokamak, FRCs provide a magnetic field topology whereby the axial field inside the reactor is reversed by eddy currents in the plasma, as compared to the ambient magnetic field externally applied by solenoids. The FRC is less prone to magnetohydrodynamic and plasma instabilities than are other magnetic confinement fusion methods. The science behind the colliding beam fusion reactor is used in the company's C-2, C-2U and C-2W projects. A key concept in the TAE system is that the FRC is kept in a useful state over an extended period. To do this, the accelerators inject the fuel such that when the particles scatter within the ring they cause the fuel already there to speed up in rotation. This process would normally slowly increase the positive charge of the fuel mass, so electrons are also injected to keep the charge roughly neutralized.
The FRC is held in a cylindrical, truck-sized vacuum chamber containing solenoids. It appears the FRC will then be compressed, either using adiabatic compression similar to those proposed for magnetic mirror systems in the 1950s, or by forcing two such FRCs together using a similar arrangement. The design must achieve the "hot enough/long enough" (HELE) threshold to achieve fusion. The required temperature is 3 billion degrees Celsius (~250 keV), while the required duration (achieved with C2-U) is multiple milliseconds. The 11B(p,α)αα aneutronic reaction An essential component of the design is the use of "advanced fuels", i.e. fuels with primary reactions that do not produce neutrons, such as hydrogen and boron-11. FRC fusion products are all charged particles for which highly efficient direct energy conversion is feasible. Neutron flux and associated on-site radioactivity is virtually non-existent. So unlike other nuclear fusion research involving deuterium and tritium, and unlike nuclear fission, no radioactive waste is created. The hydrogen and boron-11 fuel used in this type of reaction is also much more abundant. TAE Technologies relies on the clean 11B(p,α)αα reaction, also written 11B(p,3α), which produces three helium nuclei called α−particles (hence the name of the company) as follows: A proton (identical to the most common hydrogen nucleus) striking boron-11 creates a resonance in carbon-12, which decays by emitting one high-energy primary α−particle. This leads to the first excited state of beryllium-8, which decays into two low-energy secondary α-particles. This is the model commonly accepted in the scientific community since the published results account for a 1987 experiment. TAE claimed that the reaction products should release more energy than what is commonly envisaged. In 2010, Henry R. Weller and his team from the Triangle Universities Nuclear Laboratory (TUNL) used the high intensity γ-ray source (HIγS) at Duke University, funded by TAE and the U.S. Department of Energy, to show that the mechanism first proposed by Ernest Rutherford and Mark Oliphant in 1933, then Philip Dee and C. W. Gilbert from the Cavendish Laboratory in 1936, and the results of an experiment conducted by French researchers from IN2P3 in 1969, was correct. The model and the experiment predicted two high energy α-particles of almost equal energy. One was the primary α-particle and the other a secondary α-particle, both emitted at an angle of 155 degrees. A third secondary α-particle is also emitted, of lower energy. Inverse cyclotron converter (ICC) Direct energy conversion systems for other fusion power generators, involving collector plates and "Venetian blinds" or a long linear microwave cavity filled with a 10-Tesla magnetic field and rectennas, are not suitable for fusion with ion energies above 1 MeV. The company employed a much shorter device, an inverse cyclotron converter (ICC) that operated at 5 MHz and required a magnetic field of only 0.6 tesla. The linear motion of fusion product ions is converted to circular motion by a magnetic cusp. Energy is collected from the charged particles as they spiral past quadrupole electrodes. More classical collectors collect particles with energy less than 1 MeV. 
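As a compact summary of the 11B(p,α)αα reaction described above (using standard nuclear-physics values that are not taken from this article), the overall process can be written as

\[
\mathrm{p} + {}^{11}\mathrm{B} \;\longrightarrow\; {}^{12}\mathrm{C}^{*} \;\longrightarrow\; \alpha + {}^{8}\mathrm{Be}^{*} \;\longrightarrow\; 3\,\alpha \;+\; \approx 8.7\ \mathrm{MeV},
\]

i.e. the proton and the boron-11 nucleus pass through an excited carbon-12 resonance, emit a primary α-particle, and the remaining excited beryllium-8 splits into two secondary α-particles, so that all of the released energy is carried by charged particles.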
The ratio of fusion power to radiation loss for a 100 MW FRC has been estimated for different fuels, assuming a converter efficiency of 90% for α-particles, 40% for Bremsstrahlung radiation through the photoelectric effect, and 70% for the accelerators, with 10 T superconducting magnetic coils: Q = 35 for deuterium and tritium Q = 3 for deuterium and helium-3 Q = 2.7 for hydrogen and boron-11 Q = 4.3 for polarized hydrogen and boron-11. The spin polarization enhances the fusion cross section by a factor of 1.6 for 11B. A further increase in Q should result from the nuclear quadrupole moment of 11B. And another increase in Q may also result from the mechanism allowing the production of a secondary high-energy α-particle. TAE Technologies plans to use the p-11B reaction in their commercial FRC for safety reasons and because the energy conversion systems are simpler and smaller: since no neutron is released, thermal conversion is unnecessary, hence no heat exchanger or steam turbine. The "truck-sized" 100 MW reactors designed in TAE presentations are based on these calculations. Progression of Machines Sewer Pipe Developed in 1998, the company's proof-of-concept machine was created using a common sewer pipe and first demonstrated the viability of forming a field-reversed configuration magnetic field. CBFR-SPS The CBFR-SPS is a 100 MW-class, magnetic field-reversed configuration, aneutronic fusion rocket concept. The reactor is fueled by an energetic-ion mixture of hydrogen and boron (p-11B). Fusion products are helium ions (α-particles) expelled axially out of the system. α-particles flowing in one direction are decelerated and their energy directly converted to power the system, while particles expelled in the opposite direction provide thrust. Since the fusion products are charged particles and no neutrons are released, the system does not require a massive radiation shield. C-2 Various experiments have been conducted by TAE Technologies on the world's largest compact toroid device, called "C-2". Results began to be regularly published in 2010, with papers including 60 authors. C-2 results showed peak ion temperatures of 400 electronvolts (5 million degrees Celsius), electron temperatures of 150 electronvolts, plasma densities of 1×10¹⁹ m⁻³ and 1×10⁹ fusion neutrons per second for 3 milliseconds. Budker Institute The Budker Institute of Nuclear Physics, Novosibirsk, built a powerful plasma injector, shipped in late 2013 to the company's research facility. The device produces a neutral beam in the range of 5 to 20 MW, and injects energy inside the reactor to transfer it to the fusion plasma. C-2U In March 2015, the upgraded C-2U with edge-biasing beams showed a 10-fold improvement in lifetime, with FRCs heated to 10 million degrees Celsius and lasting 5 milliseconds with no sign of decay. The C-2U functions by firing two donut-shaped plasmas at each other at 1 million kilometers per hour; the result is a cigar-shaped FRC as much as 3 meters long and 40 centimeters across. The plasma was controlled with magnetic fields generated by electrodes and magnets at each end of the tube. The upgraded particle beam system provided 10 megawatts of power. C-2W/Norman In 2017, TAE Technologies renamed the C-2W reactor "Norman" in honor of the company's co-founder Norman Rostoker, who died in 2014. In July 2017, the company announced that the Norman reactor had achieved plasma. The Norman reactor is reportedly able to operate at temperatures between 50 million and 70 million °C.
In February 2018, the company announced that after 4,000 experiments it had reached a high temperature of nearly 20 million°C. In 2018, TAE Technologies partnered with the Applied Science team at Google to develop the technology inside Norman to maximize electron temperature, aiming to demonstrate breakeven fusion. In 2021, TAE Technologies stated Norman was regularly producing a stable plasma at temperatures over 50 million degrees, meeting a key milestone for the machine and unlocking an additional $280 million in financing, bringing its total of funding raised up to $880 million. In 2023, the company published a peer-reviewed paper reporting the first measurement of p-11B fusion in magnetically confined plasma at the LHD in Japan. Copernicus The Copernicus device will operate using hydrogen and is expected to attain net energy gain around 2025. The approximate cost of the reactor is $200 million, and it is intended to reach temperatures of around 100 million°C to validate conditions needed for deuterium-tritium fusion while the company scales to p-11B fuel for its superior environmental and cost profile. TAE intends to start construction in 2022. Da Vinci The Da Vinci device is a proposed successor device to Copernicus, and a prototype for a commercially scalable reactor. It is scheduled to be developed in the second half of the 2020s and is expected to achieve 3 billion°C and produce fusion energy from the p-11B fuel cycle. See also China Fusion Engineering Test Reactor Commonwealth Fusion Systems Dense plasma focus Fusion Industry Association General Fusion Helion Energy Polywell Spherical Tokamak for Energy Production References External links Accelerator physics Fusion power companies Nuclear power companies of the United States Nuclear technology companies of the United States
TAE Technologies
Physics
3,288
2,669,012
https://en.wikipedia.org/wiki/Chi%20Serpentis
Chi Serpentis (χ Ser, χ Serpentis) is a solitary star in the Serpens Caput section of the equatorial constellation Serpens. Based upon an annual parallax shift of 14.84 mas as seen from Earth, it is located around 222 light years from the Sun. The star is bright enough to be faintly visible to the naked eye, having an apparent visual magnitude of +5.30. In 1966 it was listed as a suspected spectroscopic binary, but it is believed to be single. This is a chemically peculiar Ap star with a stellar classification of , indicating that the spectrum shows abnormal excesses of manganese and europium. The star has 2.11 times the mass of the Sun and about 1.9 times the Sun's radius. It is radiating 26 times the solar luminosity from its photosphere at an effective temperature of 9,557 K. At the age of 212 million years, it is spinning with a rotation period of 1.6 days. Chi Serpentis is classified as an Alpha2 Canum Venaticorum type variable star, and its magnitude varies by 0.03 with a period of 1.5948 days. The pattern of variation in the spectrum suggests there are regions of enhanced strontium, chromium, iron, titanium, and magnesium on the surface of the star. The averaged quadratic field strength of the surface magnetic field is . References A-type main-sequence stars Alpha2 Canum Venaticorum variables Serpens Serpentis, Chi Serpentis, 20 140160 076866 5843 Durchmusterung objects
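The quoted distance follows directly from the standard parallax relation; this is a routine conversion, not an additional measurement from the article:

\[
d \;=\; \frac{1}{\pi[\text{arcsec}]}\ \mathrm{pc} \;=\; \frac{1}{0.01484}\ \mathrm{pc} \;\approx\; 67\ \mathrm{pc} \;\approx\; 220\ \text{light years},
\]

consistent with the value of around 222 light years given above once rounding of the parallax is taken into account.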
Chi Serpentis
Astronomy
336
30,549,030
https://en.wikipedia.org/wiki/Performance%20rating%20%28work%20measurement%29
Performance rating is the step in work measurement in which the analyst observes the worker's performance and records a value representing that performance relative to the analyst's concept of standard performance. Performance rating helps people do their jobs better, identifies training and education needs, assigns people to work they can excel in, and maintains fairness in salaries, benefits, promotion, hiring, and firing. Most workers want to know how they are doing on the job, and workers need performance feedback to work effectively. Giving an employee timely, accurate, constructive feedback is key to effective performance. Motivational strategies such as goal setting depend upon regular performance updates. While there are many sources of error with performance ratings, error can be reduced through rater training and through the use of behaviorally anchored rating scales. In industrial and organizational psychology such scales are used to clearly define the behaviors that constitute poor, average, and superior performance. There are several methods of performance rating. The simplest and most common method is based on speed or pace. Dexterity and effectiveness are also important considerations when assessing performance. Standard performance is denoted as 100. A performance rating greater than 100 means the worker's performance is more than standard, and less than 100 means the worker's performance is less than standard. It is important to note that standard performance is not necessarily the performance level expected of workers; the term standard can be misleading. For example, standard performance for a worker walking corresponds to 4.5 miles/hour. The rating is used in conjunction with a timing study to level out the actual time (observed time) taken by the worker under observation. This leads to a basic minute value (observed time × rating / 100; see the example below). This balances out fast and slow workers to arrive at a standard/average time. The standard of 100 is not a percentage; it simply makes the calculations easier. Most companies that set targets using work study methods will set it at a level of around 85, not 100. Attributions to work performance Performance rating has become a continuous process by which an employer and employees attempt to understand company goals and how their progress toward contributing to them is measured. Performance measurement is an ongoing activity for all managers and their subordinates. A performance measurement uses the following indicators: Quantity: addresses how much work is produced. A quantity measure can be expressed as an error rate, such as the number or percentage of errors allowable per unit of work, or as a general result to be achieved. Quality: addresses how well the work is performed and/or how accurate or how effective the final product is. Timeliness: addresses how quickly, when or by what date the work is produced. The most common error made in setting timeliness standards is to allow no margin for error. As with other standards, timeliness standards should be set realistically in view of other performance requirements and needs of the organization. Cost-effectiveness: addresses dollar savings to the organization or working within a budget. Standards that address cost-effectiveness should be based on specific resource levels (money, personnel, or time) that generally can be documented and measured in agencies' annual fiscal year budgets.
Cost-effectiveness standards may include such aspects of performance as maintaining or reducing unit costs, reducing the time it takes to produce a product or service, or reducing waste. Absenteeism/tardiness: addresses an employee's ability to show up at work and on time, and how absence or lateness affects their work performance and that of other employees. Adherence to policy: addresses deviation from policy and performance goals. Professional appearance: addresses how well employees conduct themselves in the workplace and comply with the dress code and working environment. Effectiveness of performance rating The purpose of performance rating is to provide a systematic evaluation of the employees' contribution to the organization. Globally, the use of indicators and performance management, combined with the intensification of work, transforms the work of both employees and managers. At the managerial level, the hierarchy's will to fulfill performance indicators depends on task prioritization, which is not shared by everyone. Performance rating intensifies the organization's environment but provides structure for production. Performance satisfaction is found to be directly related to both an employee's affective commitment and intention; employees who are motivated are more likely to meet goals. See also Performance appraisal References Industrial engineering
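The basic-minute-value calculation described in the first section can be illustrated with a short sketch; the observed times and ratings below are hypothetical, chosen only to show the arithmetic:

```python
def basic_time(observed_time: float, rating: float, standard: float = 100.0) -> float:
    """Level an observed time by the analyst's performance rating.

    Basic time = observed time * (rating / standard), with standard pace = 100.
    A rating above 100 (a fast worker) scales the time up toward what a
    standard-pace worker would need; a rating below 100 scales it down.
    """
    return observed_time * rating / standard

# Hypothetical observations of the same work element (times in minutes).
print(basic_time(0.50, 110))  # 0.55  - worker observed working faster than standard
print(basic_time(0.50, 85))   # 0.425 - worker observed working slower than standard
```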
Performance rating (work measurement)
Engineering
861
56,579,386
https://en.wikipedia.org/wiki/1-Diazidocarbamoyl-5-azidotetrazole
1-Diazidocarbamoyl-5-azidotetrazole, often jokingly referred to as azidoazide azide, is a heterocyclic inorganic compound with the formula C2N14. It is a highly reactive and extremely sensitive explosive. Synthesis 1-Diazidocarbamoyl-5-azidotetrazole was produced by diazotizing triaminoguanidinium chloride with sodium nitrite in ultra-purified water. Another synthesis uses a metathesis reaction between isocyanogen tetrabromide in acetone and aqueous sodium azide. This first forms isocyanogen tetraazide, the "open" isomer of C2N14, which at room temperature quickly undergoes an irreversible cyclization reaction to form a tetrazole ring. Properties The C2N14 molecule is a monocyclic tetrazole with three azide groups. This ring form is in equilibrium with isocyanogen tetraazide, an isomeric acyclic structure that has long been known to cyclize quickly to the tetrazole. It is one of a family of high-energy nitrogen compounds in which the nitrogen atoms are not held by the strong triple bond found in nitrogen gas. This instability makes many such compounds liable to explosive decomposition, releasing nitrogen gas. This tetrazole explosive has a decomposition temperature of 124 °C. It is very sensitive, with an impact sensitivity lower than 0.25 joules. It is, however, less sensitive than nitrogen triiodide. Decomposition can be initiated merely by contact or by a laser beam. For these reasons, it is often erroneously claimed to be the world's most sensitive compound. See also References External links Tetrazoles Organoazides Explosive chemicals Inorganic carbon compounds Azines (hydrazine derivatives)
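The nitrogen release mentioned above can be written as an idealized decomposition; this assumes complete denitrogenation to N2 and elemental carbon and is only a balanced sketch, since the real decomposition products are more complex:

\[
\mathrm{C_2N_{14}} \;\longrightarrow\; 2\,\mathrm{C} \;+\; 7\,\mathrm{N_2}.
\]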
1-Diazidocarbamoyl-5-azidotetrazole
Chemistry
379
311,507
https://en.wikipedia.org/wiki/Opodeldoc
Opodeldoc is a medical plaster or liniment invented, or at least named, by the German Renaissance physician Paracelsus in the 1500s. In modern form opodeldoc is a mixture of soap in alcohol, to which camphor and sometimes a number of herbal essences, most notably wormwood, are added. Origins In his Bertheonea Sive Chirurgia Minor published in 1603, Paracelsus mentioned "oppodeltoch" twice, but with uncertain ingredients. As to the origin of the name, Kurt Peters speculated that it was coined by Paracelsus from syllables from the words "opoponax, bdellium, and aristolochia." Opoponax is a variety of myrrh; bdellium is Commiphora wightii, which produces a similar resin; and Aristolochia is a widely distributed genus which includes A. pfeiferi, A. rugosa and A. trilobata that are used in folk medicine to cure snakebites. The name suggests that these aromatic plants may have figured in Paracelsus's recipe. In his Medicina Militaris of 1620, German military physician Raymund Minderer ("Mindererus"; 1570-1621) praised the Paracelsus compound as a plaster, good for wounds. Minderer compared it to his own variant, which set more like sealing wax. Opodeldoc and Paracelsus were acknowledged in English no later than 1646, in Sir Thomas Browne's popular and influential Pseudodoxia Epidemica. Paracelsus's recipe is completely unrelated to later preparations of the same name. By the second printing of the Edinburgh Pharmacopoeia in 1722 the name applied to a soap-based liniment. Such a liniment in patent form, sold by John Newbery's company in Great Britain "ever since A.D. 1786", was called "Dr. Steer's Opodeldoc". Produced for decades, the "Dr. Steer" preparation had been successfully imported into the U.S., and was common enough there to rank as one of the eight patent medicines to be analyzed (although not condemned) by the Philadelphia College of Pharmacy in 1824. The name Old Opodeldoc was formerly used as a standard name for a stock character who was a physician, especially when played as a comic figure. Edgar Allan Poe used "Oppodeldoc" as a pseudonym for a character in the short story "The Literary Life of Thingum Bob, Esq." Modern usage The Pharmacopoeia of the United States (U.S.P.) gives a recipe for opodeldoc that contains: Powdered soap, 60 grams; Camphor, 45 grams; Oil of rosemary, 10 milliliters; Alcohol, 700 milliliters; Water, enough to make 1000 milliliters As late as the early 1990s 'Epideldoc' (sic) was compounded on request by several pharmacists in the Northwest of England. References Ointments
Opodeldoc
Chemistry
634
29,218,859
https://en.wikipedia.org/wiki/Roboty
ROBOTY () is a differential wheeled robot with self-balancing, motion, speech and object recognition capabilities. ROBOTY is also the first autonomous robot in Yemen; all of its functions are intended to be controlled primarily by voice commands. The final goal of this research project is to build a robot capable of playing chess. History and background ROBOTY was first introduced on October 21, 2010, by its inventor, Hamdi M. Sahloul, as his final year project. The introductory seminar showed the components and capabilities of the robot. These capabilities included moving, speaking, hearing, facial recognition, and GPS navigation. Various media and newspapers covered this event, including Yemen TV Channel, Al-Motamar, 26 Sep., Almasdar Online, Al-Sahwa, 22 May, Al-Moheet, Al-Hadath, Al-Tagheer, Al-Bida Press, Shabab Al-Yemen, Yemen Sound and Nashwan News. References Robots of Yemen Technology systems Humanoid robots Unmanned ground vehicles Autonomy Command and control Geographical technology 2010 robots Differential wheeled robots
Roboty
Technology,Engineering
217
17,638,299
https://en.wikipedia.org/wiki/Polylysine
Polylysine refers to several types of lysine homopolymers, which may differ from each other in terms of stereochemistry (D/L; the L form is natural and usually assumed) and link position (α/ε). Of these types, only ε-poly-L-lysine is produced naturally. Chemical structure The precursor amino acid lysine contains two amino groups, one at the α-carbon and one at the ε-carbon. Either can be the location of polymerization, resulting in α-polylysine or ε-polylysine. Polylysine is a homopolypeptide belonging to the group of cationic polymers: at pH 7, polylysine carries positively charged hydrophilic amino groups. α-Polylysine is a synthetic polymer, which can be composed of either L-lysine or D-lysine. "L" and "D" refer to the chirality at lysine's central carbon. This results in poly-L-lysine (PLL) and poly-D-lysine (PDL) respectively. ε-Polylysine (ε-poly-L-lysine, EPL), which is produced by bacterial fermentation, is typically a homopolypeptide of approximately 25–30 L-lysine residues. According to research, ε-polylysine is adsorbed electrostatically onto the bacterial cell surface, followed by stripping of the outer membrane; this eventually leads to an abnormal distribution of the cytoplasm, damaging the bacterial cell. ε-Poly-L-lysine is used as a natural preservative in food products. Production Production of polylysine by natural fermentation is only observed in strains of bacteria in the genus Streptomyces. Streptomyces albulus is most often used in scientific studies and is also used for the commercial production of ε-polylysine. α-Polylysine is synthetically produced by a basic polycondensation reaction. History The production of ε-polylysine by natural fermentation was first described by researchers Shoji Shima and Heiichi Sakai in 1977. Since the late 1980s, ε-polylysine has been approved by the Japanese Ministry of Health, Labour and Welfare as a preservative in food. In January 2004, ε-polylysine became generally recognized as safe (GRAS) certified in the United States. ε-Polylysine In food ε-Polylysine is used commercially as a food preservative in Japan, Korea and in imported items sold in the United States. Food products containing polylysine are mainly found in Japan. The use of polylysine is common in food applications such as boiled rice, cooked vegetables, soups, noodles and sliced fish (sushi). Literature studies have reported an antimicrobial effect of ε-polylysine against yeast, fungi, Gram-positive bacteria and Gram-negative bacteria. Polylysine has a light yellow appearance and is slightly bitter in taste whether in powder or liquid form. α-Polylysine In tissue culture α-Polylysine is commonly used to coat tissue cultureware as an attachment factor which improves cell adherence. This phenomenon is based on the interaction between the positively charged polymer and negatively charged cells or proteins. While the poly-L-lysine (PLL) precursor amino acid occurs naturally, the poly-D-lysine (PDL) precursor is an artificial product. The latter is therefore thought to be resistant to enzymatic degradation and so may prolong cell adherence. Polylysine in drug delivery Polylysine exhibits high positive charge density which allows it to form soluble complexes with negatively charged macromolecules. Polylysine homopolymers or block copolymers have been widely used for delivery of DNA and proteins.
Polylysine-based nanoparticles have also been shown to passively accumulate in the injured sites of blood vessels after stroke due to incorporation into newly formed thrombus, which offers a new way to deliver therapeutic agents specifically to the sites of injury after vascular damage. Chemical modification In 2010, hydrophobically modified ε-polylysine was synthesized by reacting EPL with octenyl succinic anhydride (OSA). It was found that OSA-g-EPLs had glass transition temperatures lower than EPL. They were able to form polymer micelles in water and to lower the surface tension of water, confirming their amphiphilic properties. The antimicrobial activities of OSA-g-EPLs were also examined, and the minimum inhibitory concentrations of OSA-g-EPLs against Escherichia coli O157:H7 remained the same as that of EPL. Therefore, modified EPLs have the potential of becoming bifunctional molecules, which can be used either as surfactants or emulsifiers in the encapsulation of water-insoluble drugs or as antimicrobial agents. References Food additives Food preservatives Polymers Amino acid derivatives
Polylysine
Chemistry,Materials_science
1,098
36,211,727
https://en.wikipedia.org/wiki/Journal%20of%20Infrared%2C%20Millimeter%2C%20and%20Terahertz%20Waves
The Journal of Infrared, Millimeter, and Terahertz Waves is a monthly peer-reviewed scientific journal published by Springer Science+Business Media. The editor is Martin Koch (Philipps University of Marburg). Its publishing formats are letters and regular full papers. The journal was established in 1980 (with editor-in-chief Kenneth J. Button) as International Journal of Infrared and Millimeter Waves. The journal's first 29 volumes (1980–2008) were published under the old title; beginning with volume 30 (January 2009) the journal has been published under its current title. Scope This journal focuses on original research pertaining to the 30 Gigahertz to 30 Terahertz frequency band of the electromagnetic spectrum. Sources, detectors, and other devices that operate in this frequency range are given topical coverage. Other subjects covered by this journal are systems, spectroscopy, applications, communications, sensing, metrology, and electromagnetic wave and matter interactions. Abstracting and indexing According to the Journal Citation Reports, the journal had a 2020 impact factor of 1.768. The journal is abstracted and indexed in: References External links Electrical and electronic engineering journals Physics journals English-language journals Monthly journals Academic journals established in 1980 Springer Science+Business Media academic journals
Journal of Infrared, Millimeter, and Terahertz Waves
Engineering
256
9,013,477
https://en.wikipedia.org/wiki/Immunoreceptor%20tyrosine-based%20activation%20motif
An immunoreceptor tyrosine-based activation motif (ITAM) is a conserved sequence of four amino acids that is repeated twice in the cytoplasmic tails of non-catalytic tyrosine-phosphorylated receptors, cell-surface proteins found mainly on immune cells. Its major role is as an integral component in the initiation of a variety of signaling pathways and, subsequently, the activation of immune cells, although other functions have been described, for example in osteoclast maturation. Structure The motif contains a tyrosine separated from a leucine or isoleucine by any two other amino acids, giving the signature YxxL/I. Two of these signatures are typically separated by between 6 and 8 amino acids in the cytoplasmic tail of the molecule (YxxL/Ix(6-8)YxxL/I). However, in various sources, this consensus sequence differs, mainly in the number of amino acids between individual signatures. Apart from ITAMs with the structure described above, there is also a variety of proteins containing ITAM-like motifs, which have a very similar structure and function (for example, the Dectin-1 protein). Function ITAMs are important for signal transduction, mainly in immune cells. They are found in the cytoplasmic tails of non-catalytic tyrosine-phosphorylated receptors such as the CD3 and ζ-chains of the T cell receptor complex, the CD79-alpha and -beta chains of the B cell receptor complex, and certain Fc receptors. The tyrosine residues within these motifs become phosphorylated by Src family kinases following interaction of the receptor molecules with their ligands. Phosphorylated ITAMs serve as docking sites for other proteins containing an SH2 domain, usually two domains in tandem, inducing a signaling cascade mediated by Syk family kinases (which are the primary proteins that bind to phosphorylated ITAMs), namely either Syk or ZAP-70, resulting mostly in the activation of the given cell. Paradoxically, in some cases, ITAMs and ITAM-like motifs do not have an activating effect, but rather an inhibitory one. The exact mechanisms of this phenomenon have not yet been elucidated. Other non-catalytic tyrosine-phosphorylated receptors carry a conserved inhibitory motif (ITIM) that, when phosphorylated, results in the inhibition of the signaling pathway via recruitment of phosphatases, namely SHP-1, SHP-2 and SHIP1. This serves not only for inhibition and regulation of signaling pathways related to ITAM-based signaling, but also for termination of signaling. Genetic variations Rare human genetic mutations catalogued in human genetic variation databases can reportedly result in the creation or deletion of ITIMs and ITAMs. Examples The examples shown below include both proteins that contain an ITAM themselves and proteins that use ITAM-based signaling through associated proteins that contain the motif. CD3γ, CD3δ, CD3ε, TYROBP (DAP12), FcαRI, FcγRI, FcγRII, FcγRIII, Dectin-1, CLEC-1, CD28, CD72 References Cell signaling Immune system
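A minimal sketch of how the consensus signature described above can be searched for in a protein sequence; the regular expression simply encodes YxxL/I, a 6–8 residue spacer, and a second YxxL/I, and the example sequence is invented for illustration rather than taken from a real receptor:

```python
import re

# YxxL/I, a spacer of 6-8 arbitrary residues, then a second YxxL/I.
ITAM_PATTERN = re.compile(r"Y..[LI].{6,8}Y..[LI]")

def find_itams(sequence: str):
    """Return (start, end, motif) tuples for ITAM-like matches in a protein sequence."""
    return [(m.start(), m.end(), m.group()) for m in ITAM_PATTERN.finditer(sequence)]

# Hypothetical cytoplasmic-tail fragment, not a real receptor sequence.
tail = "MKAAYSELDDKTTQAYQPLARSV"
print(find_itams(tail))  # [(4, 19, 'YSELDDKTTQAYQPL')]
```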
Immunoreceptor tyrosine-based activation motif
Biology
693
50,577,513
https://en.wikipedia.org/wiki/NGC%20533
NGC 533 is an elliptical galaxy in the constellation Cetus. It was discovered on October 8, 1785, by William Herschel. It was described as "pretty bright, pretty large, round, gradually brighter middle" by John Louis Emil Dreyer, the compiler of the New General Catalogue. References Notes External links Cetus Elliptical galaxies 0533 005283 00992
NGC 533
Astronomy
79
15,557,750
https://en.wikipedia.org/wiki/Spindling
In computing, spindling is the allocation of different files (e.g., the data files and index files of a database) to different hard disks. This practice usually reduces contention for read or write resources, thus increasing the system's performance. The word comes from spindle, the axle on which a hard disk's platters spin. Computer jargon Databases
Spindling
Technology
72
75,592,945
https://en.wikipedia.org/wiki/Telecommunications%20Act%2C%202023
The Telecommunications Act, 2023 is an act of the Parliament of India to replace the Indian Telegraph Act, 1885. It aims to consolidate laws relating to development, expansion and operation of telecommunication services and networks. Background and timeline On 20 December 2023, the Telecommunications bill, 2023 was passed by Lok Sabha. On 21 December 2023, the Telecommunications bill, 2023 was passed in Rajya Sabha. The Bill replaces the Indian Telegraph Act of 1885 with a comprehensive framework for the telecom sector. The Key Provisions of the Bill are: 1. Regulation of OTT Services: The bill proposes to bring over-the-top (OTT) services under the definition of telecommunications. This would subject them to similar regulations as traditional telecom services, potentially raising concerns about privacy and freedom of expression. 2. Government powers: The bill grants the government wide-ranging powers, including the ability to: Suspend or prohibit use of telecom equipment from countries or individuals for national security reasons. Take over, manage, or suspend any or all telecommunication services or networks in the interest of national security. Waive entry fees, license fees, penalties, etc., to promote consumer interests, market competition, or national security. 3. Spectrum allocation: The bill introduces a new system for allocating spectrum for satellite broadband services. This could potentially benefit rural areas and bridge the digital divide. 4. Other provisions: The bill also includes provisions for: Promoting research and development in the telecom sector. Protecting consumer rights and ensuring data privacy. Facilitating the deployment of new technologies like 5G. Reactions Concerns have been raised about the potential for government overreach and content censorship, as the bill grants broad powers to regulate online content. The bill's provisions granting wide-ranging powers to the government, including suspension of services and equipment bans, have been criticized as giving excessive control and potentially jeopardizing fundamental rights like freedom of expression and privacy. Critics argue that the drafting and consultation process for the bill has been opaque and lacked sufficient involvement of key stakeholders, leading to concerns about its effectiveness and fairness. The bill's data localization requirements, which mandate storing user data within India, raise concerns about potential misuse and surveillance by the government or third parties. Provisions for interception and decryption of communications further add to worries about the protection of personal information and online privacy. References Government of India Parliamentary procedure Telecommunications billing systems
Telecommunications Act, 2023
Technology
481
38,754,240
https://en.wikipedia.org/wiki/Hydrothermal%20liquefaction
Hydrothermal liquefaction (HTL) is a thermal depolymerization process used to convert wet biomass, and other macromolecules, into crude-like oil under moderate temperature and high pressure. The crude-like oil has a high energy density, with a lower heating value of 33.8–36.9 MJ/kg and an oxygen content of 5–20 wt%; the process can also yield renewable chemicals. The process has also been called hydrous pyrolysis. The reaction usually involves homogeneous and/or heterogeneous catalysts to improve the quality of products and yields. The carbon and hydrogen of an organic material, such as biomass, peat or low-rank coal (lignite), are thermochemically converted into hydrophobic compounds with low viscosity and high solubility. Depending on the processing conditions, the fuel can be used as produced for heavy engines, including marine and rail engines, or upgraded to transportation fuels such as diesel, gasoline or jet fuel. The process may be significant in the creation of fossil fuels. Simple heating without water (anhydrous pyrolysis) has long been considered to take place naturally during the catagenesis of kerogens to fossil fuels. In recent decades it has been found that water under pressure causes more efficient breakdown of kerogens at lower temperatures than without it. The carbon isotope ratio of natural gas also suggests that hydrogen from water has been added during creation of the gas. History As early as the 1920s, the concept of using hot water and alkali catalysts to produce oil out of biomass was proposed. In 1939, U.S. patent 2,177,557 described a two-stage process in which a mixture of water, wood chips, and calcium hydroxide is heated in the first stage at temperatures in a range of , with the pressure "higher than that of saturated steam at the temperature used." This produces "oils and alcohols" which are collected. The materials are then subjected in a second stage to what is called "dry distillation", which produces "oils and ketones". Temperatures and pressures for this second stage are not disclosed. These processes were the foundation of later HTL technologies that attracted research interest especially during the 1970s oil embargo. It was around that time that a high-pressure (hydrothermal) liquefaction process was developed at the Pittsburgh Energy Research Center (PERC) and later demonstrated (at the 100 kg/h scale) at the Albany Biomass Liquefaction Experimental Facility at Albany, Oregon, US. In 1982, Shell Oil developed the HTU™ process in the Netherlands. Other organizations that have previously demonstrated HTL of biomass include Hochschule für Angewandte Wissenschaften Hamburg, Germany, SCF Technologies in Copenhagen, Denmark, EPA's Water Engineering Research Laboratory, Cincinnati, Ohio, USA, and Changing World Technologies Inc. (CWT), Philadelphia, Pennsylvania, USA. Today, technology companies such as Licella/Ignite Energy Resources (Australia), Arbios Biotech, a Licella/Canfor joint venture, Altaca Energy (Turkey), Circlia Nordic (Denmark), and Steeper Energy (Denmark, Canada) continue to explore the commercialization of HTL. Construction has begun in Teesside, UK, for a catalytic hydrothermal liquefaction plant that aims to process 80,000 tonnes per year of mixed plastic waste by 2022. Chemical reactions In hydrothermal liquefaction processes, long carbon-chain molecules in biomass are thermally cracked and oxygen is removed in the form of H2O (dehydration) and CO2 (decarboxylation). These reactions result in the production of a bio-oil with a high H/C ratio.
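In generic textbook form (and not reproduced from the literature cited below), the two oxygen-removal routes named above can be sketched as

\[
\underbrace{R{-}\mathrm{COOH} \;\longrightarrow\; R{-}\mathrm{H} + \mathrm{CO_2}}_{\text{decarboxylation}}
\qquad\qquad
\underbrace{R{-}\mathrm{CH(OH)}{-}\mathrm{CH_2}{-}R' \;\longrightarrow\; R{-}\mathrm{CH{=}CH}{-}R' + \mathrm{H_2O}}_{\text{dehydration}}
\]

where $R$ and $R'$ stand for arbitrary organic fragments of the biomass macromolecules; both routes strip oxygen from the feedstock and thereby raise the H/C ratio (and heating value) of the resulting bio-oil.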
Simplified descriptions of dehydration and decarboxylation reactions can be found in the literature (e.g. Asghari and Yoshida (2006) and Snåre et al. (2007)). Process Most applications of hydrothermal liquefaction operate at temperatures of 250–550 °C and high pressures of 5–25 MPa, often with catalysts, for 20–60 minutes, although higher or lower temperatures can be used to optimize gas or liquid yields, respectively. At these temperatures and pressures, the water present in the biomass becomes either subcritical or supercritical, depending on the conditions, and acts as a solvent, reactant, and catalyst to facilitate the reaction of biomass to bio-oil. The exact conversion of biomass to bio-oil is dependent on several variables: Feedstock composition Temperature and heating rate Pressure Solvent Residence time Catalysts Feedstock Theoretically, any biomass can be converted into bio-oil using hydrothermal liquefaction regardless of water content, and various biomasses have been tested, from forestry and agricultural residues, sewage sludges and food-processing wastes to emerging non-food biomass such as algae. The composition of cellulose, hemicellulose, protein, and lignin in the feedstock influences the yield and quality of the oil from the process. Zhang et al., at the University of Illinois, report on a hydrous pyrolysis process in which swine manure is converted to oil by heating the swine manure and water in the presence of carbon monoxide in a closed container. For that process they report that a temperature of at least is required to convert the swine manure to oil, and that temperatures above about reduce the amount of oil produced. The Zhang et al. process produces pressures of about 7 to 18 MPa (1,000 to 2,600 psi; 69 to 178 atm), with higher temperatures producing higher pressures. Zhang et al. used a retention time of 120 minutes for the reported study, but report that at higher temperatures a time of less than 30 minutes results in significant production of oil. Barbero-López et al., at the University of Eastern Finland, tested the use of spent mushroom substrate and tomato plant residues as feedstocks for hydrothermal liquefaction. They focused on the hydrothermal liquids produced, which are rich in many different constituents, and found that they are potential antifungals against several fungi that cause decay in wood, while their ecotoxicity was lower than that of a commercial Cu-based wood preservative. The effectiveness of the antifungal activity of the hydrothermal liquids varied mostly due to liquid concentration and strain sensitivity, while the different feedstocks did not have such a significant effect. A commercialized process using hydrous pyrolysis (see the article Thermal depolymerization) was used by Changing World Technologies, Inc. (CWT) and its subsidiary Renewable Environmental Solutions, LLC (RES) to convert turkey offal. It is a two-stage process: the first stage converts the turkey offal to hydrocarbons at a temperature of , and the second stage cracks the oil into light hydrocarbons at a temperature of near . Adams et al. report only that the first stage heating is "under pressure"; Lemley, in a non-technical article on the CWT process, reports for the first stage (conversion) a temperature of about and a pressure of about 600 psi, with a conversion time of "usually about 15 minutes". For the second stage (cracking), Lemley reports a temperature of about . Temperature and heating rate Temperature plays a major role in the conversion of biomass to bio-oil.
The temperature of the reaction determines the depolymerization of the biomass to bio-oil, as well as the repolymerization into char. While the ideal reaction temperature is dependent on the feedstock used, temperatures above ideal lead to an increase in char formation and eventually increased gas formation, while lower than ideal temperatures reduce depolymerization and overall product yields. Similarly to temperature, the rate of heating plays a critical role in the production of the different phase streams, due to the prevalence of secondary reactions at non-optimum heating rates. Secondary reactions become dominant in heating rates that are too low, leading to the formation of char. While high heating rates are required to form liquid bio-oil, there is a threshold heating rate and temperature where liquid production is inhibited and gas production is favored in secondary reactions. Pressure Pressure (along with temperature) determines the super- or subcritical state of solvents as well as overall reaction kinetics and the energy inputs required to yield the desirable HTL products (oil, gas, chemicals, char etc.). Residence Time Hydrothermal liquefaction is a fast process, resulting in low residence times for depolymerization to occur. Typical residence times are measured in minutes (15 to 60 minutes); however, the residence time is highly dependent on the reaction conditions, including feedstock, solvent ratio and temperature. As such, optimization of the residence time is necessary to ensure a complete depolymerization without allowing further reactions to occur. Catalysts While water acts as a catalyst in the reaction, other catalysts can be added to the reaction vessel to optimize the conversion. Previously used catalysts include water-soluble inorganic compounds and salts, including KOH and Na2CO3, as well as transition metal catalysts using nickel, palladium, platinum and ruthenium supported on either carbon, silica or alumina. The addition of these catalysts can lead to an oil yield increase of 20% or greater, due to the catalysts converting the protein, cellulose, and hemicellulose into oil. This ability for catalysts to convert biomaterials other than fats and oils to bio-oil allows for a wider range of feedstock to be used. Environmental Impact Biofuels that are produced through hydrothermal liquefaction are carbon neutral, meaning that there are no net carbon emissions produced when burning the biofuel. The plant materials used to produce bio-oils use photosynthesis to grow, and as such consume carbon dioxide from the atmosphere. The burning of the biofuels produced releases carbon dioxide into the atmosphere, but is nearly completely offset by the carbon dioxide consumed from growing the plants, resulting in a release of only 15-18 g of CO2 per kWh of energy produced. This is substantially lower than the releases rate of fossil fuel technologies, which can range from releases of 955 g/kWh (coal), 813 g/kWh (oil), and 446 g/kWh (natural gas). Recently, Steeper Energy announced that the carbon intensity (CI) of its Hydrofaction™ oil is 15 CO2eq/MJ according to GHGenius model (version 4.03a), while diesel fuel is 93.55 CO2eq/MJ. Hydrothermal liquefaction is a clean process that doesn't produce harmful compounds, such as ammonia, NOx, or SOx. Instead the heteroatoms, including nitrogen, sulfur, and chlorine, are converted into harmless byproducts such as N2 and inorganic acids that can be neutralized with bases. 
Comparison with pyrolysis and other biomass-to-liquid technologies The HTL process differs from pyrolysis in that it can process wet biomass and produce a bio-oil with approximately twice the energy density of pyrolysis oil. Pyrolysis is related to HTL, but its biomass feed must be processed and dried in order to increase the yield. The presence of water in pyrolysis drastically increases the heat of vaporization of the organic material, increasing the energy required to decompose the biomass. Typical pyrolysis processes require a water content of less than 40% to suitably convert the biomass to bio-oil. This requires considerable pretreatment of wet biomass such as tropical grasses, which have a water content as high as 80–85%, and even further treatment for aquatic species, which can have a water content higher than 90%. The HTL oil can contain up to 80% of the feedstock carbon content (single pass). HTL has good potential to yield bio-oil with "drop-in" properties that can be directly distributed in existing petroleum infrastructure. The energy returned on energy invested (EROEI) of these processes is uncertain or has not been measured. In addition, products of hydrous pyrolysis might not meet current fuel standards, and further processing may be required to produce fuels. See also Gasification Pyrolysis Thermal decomposition Thermal depolymerization References External links A Possible Deep-Basin High-Rank Gas Machine Via Water Organic-Matter Redox Reactions, Leigh C. Price Surreptitiously converting dead matter into oil and coal - Water, Water Everywhere, Science News, February 20, 1993, Elizabeth Pennisi Hydrogen isotope systematics of thermally generated natural gases, Chris Clayton Organic reactions Chemical processes Industrial processes Biodegradable waste management Waste treatment technology
Hydrothermal liquefaction
Chemistry,Engineering
2,654
2,076,325
https://en.wikipedia.org/wiki/Sergey%20Lebedev%20%28chemist%29
Sergei Vasilievich Lebedev (; 13 July 1874 – 2 May 1934) was a Russian/Soviet chemist and the inventor of polybutadiene synthetic rubber, the first commercially viable and mass-produced type of synthetic rubber. Biography Lebedev was born in 1874 in Lublin and went to school in Warsaw. In 1900, he graduated from St. Petersburg University and found work at the Petersburg Margarine Factory. Starting in 1902, Lebedev moved from university to university in Russia, starting at the Saint-Petersburg Institute for Railroad Engineering. In 1904, he returned to St. Petersburg University to work under Alexey Favorsky (Stalin Prize, 1941, for contributions to the manufacture of synthetic rubber). In 1905, he married his second wife, the artist Anna Ostroumova-Lebedeva. In 1915, Lebedev was appointed Professor at the Women's Pedagogical Institute in St. Petersburg. After 1916, he was a Professor of the Saint Petersburg Academy for Military Medicine. In 1925, he became the leader of the Oil Laboratory (after 1928, the Laboratory of Synthetic Resins) at St. Petersburg University. He died in Leningrad and is interred in Tikhvin Cemetery. Works Lebedev's main works are devoted to polymerisation of diene hydrocarbons. He was the first to research the polymerisation of butadiene (1910–1913). In 1910, Lebedev was the first to get synthetic rubber based on polybutadiene. His book Research in polymerisation of by-ethylene hydrocarbons (1913) became the bible for studies of synthetic rubber. After 1914, he studied polymerisation of ethylene monomers, leading to modern industrial methods for manufacturing of butyl synthetic rubber and poly-isobutylene. Between 1926 and 1928, he developed a single-stage method for manufacturing butadiene out of ethanol. In 1928, he developed an industrial method for producing synthetic rubber based on polymerisation of butadiene using metallic sodium as a catalyst. This method became the base for the Soviet industry of synthetic rubber. The Soviets lacked reliable access to natural rubber, making the manufacture of synthetic rubber important. The first three synthetic rubber plants were launched in 1932–33. For butadiene production they used grain or potato ethanol as a feedstock. It caused a number of jokes about "Russian method of making tires from potatoes". By 1940, the Soviet Union had the largest synthetic rubber industry in the world, producing more than 50,000 tons per year. During World War II, Lebedev's process of obtaining butadiene from ethyl alcohol was also used by the German rubber industry. Another important contribution of Lebedev's was the study of the kinetics of hydrogenation of ethylene hydrocarbons and the development of a number of synthetic motor oils for aircraft engines. Honors In 1931, Lebedev was awarded the Order of Lenin for his work on synthetic rubber In 1932, he became a full member of the Soviet Academy of Sciences. In 1945 the National Institute for Synthetic Rubber was named "Lebedev's Institute". References External links July 25 – Today In Science History at www.todayinsci.com Butadiene: Definition and Much More from Answers.com at www.answers.com 1874 births 1934 deaths Scientists from Lublin Soviet chemists Polymer scientists and engineers Saint Petersburg State University alumni Full Members of the USSR Academy of Sciences Academic staff of Saint Petersburg State University Burials at Tikhvin Cemetery Soviet inventors Chemists from the Russian Empire
Sergey Lebedev (chemist)
Chemistry,Materials_science
728
617,777
https://en.wikipedia.org/wiki/Electron%20deficiency
In chemistry, electron deficiency (and electron-deficient) is jargon that is used in two contexts: chemical species that violate the octet rule because they have too few valence electrons and species that happen to follow the octet rule but have electron-acceptor properties, forming donor-acceptor charge-transfer salts. Octet rule violations Traditionally, "electron-deficiency" is used as a general descriptor for boron hydrides and other molecules which do not have enough valence electrons to form localized (2-centre 2-electron) bonds joining all atoms. For example, diborane (B2H6) would require a minimum of 7 localized bonds with 14 electrons to join all 8 atoms, but there are only 12 valence electrons. A similar situation exists in trimethylaluminium. The electron deficiency in such compounds is similar to metallic bonding. Electron-acceptor molecules Alternatively, electron-deficiency describes molecules or ions that function as electron acceptors. Such electron-deficient species obey the octet rule, but they have (usually mild) oxidizing properties. 1,3,5-Trinitrobenzene and related polynitrated aromatic compounds are often described as electron-deficient. Electron deficiency can be measured by linear free-energy relationships: "a strongly negative ρ value indicates a large electron demand at the reaction center, from which it may be concluded that a highly electron-deficient center, perhaps an incipient carbocation, is involved." References Chemical bonding
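The electron count for diborane described above can be checked with a few lines of arithmetic; the following Python sketch only restates the bookkeeping in the text (3 valence electrons per boron atom, 1 per hydrogen, and 2 electrons per localized two-centre bond), with names chosen for the example.

```python
# Valence electrons available in B2H6
valence = {"B": 3, "H": 1}
atoms = {"B": 2, "H": 6}
available = sum(valence[el] * n for el, n in atoms.items())   # 2*3 + 6*1 = 12

# Electrons needed for 7 localized 2-centre 2-electron bonds joining all 8 atoms
needed = 7 * 2                                                # 14

print(f"available = {available}, needed = {needed}, deficit = {needed - available}")
```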
Electron deficiency
Physics,Chemistry,Materials_science
315
53,304,994
https://en.wikipedia.org/wiki/Danoprevir
Danoprevir (INN) is an orally available 15-membered macrocyclic peptidomimetic inhibitor of NS3/4A HCV protease. It contains acylsulfonamide, fluoroisoindole and tert-butyl carbamate moieties. Danoprevir is a clinical candidate based on its favorable potency profile against multiple HCV genotypes 1–6 and key mutants (GT1b, IC50 = 0.2–0.4 nM; replicon GT1b, EC50 = 1.6 nM). Danoprevir has been evaluated in an open-label, single arm clinical trial in combination with ritonavir for treating COVID-19 and favourably compared to lopinavir/ritonavir in a second trial. History Danaoprevir was initially developed by Array BioPharma then licensed to Roche for further development and commercialization. In 2013, Danoprevir was licensed to Ascletis by Roche for development and production in China under the tradename Ganovo. References Further reading Anti–hepatitis C agents Antiviral drugs COVID-19 drug development Macrocycles NS3/4A protease inhibitors Carbamates Cyclopropyl compounds Organofluorides Pyrrolidines Acylsulfonamides
Danoprevir
Chemistry,Biology
281
8,539,751
https://en.wikipedia.org/wiki/Ovomucoid
Ovomucoid is a protein found in egg whites. It is a trypsin inhibitor with three protein domains of the Kazal domain family. The homologs from chickens (Gallus gallus) and especially turkeys (Meleagris gallopavo) are best characterized. It is not related to the similarly named ovomucin, another egg white protein. Chicken ovomucoid, also known as Gal d 1, is a known allergen. It is the protein most often causing egg allergy. At least four IgE epitopes have been identified. Three other egg white proteins are also identified as allergenic: ovalbumin (Gal d 2), ovotransferrin (Gal d 3) and lysozyme (Gal d 4). References Protease inhibitors Avian proteins
Ovomucoid
Chemistry
176
162,557
https://en.wikipedia.org/wiki/Collusion
Collusion is a deceitful agreement or secret cooperation between two or more parties to limit open competition by deceiving, misleading or defrauding others of their legal right. Collusion is not always considered illegal. It can be used to attain objectives forbidden by law; for example, by defrauding or gaining an unfair market advantage. It is an agreement among firms or individuals to divide a market, set prices, limit production or limit opportunities. It can involve "unions, wage fixing, kickbacks, or misrepresenting the independence of the relationship between the colluding parties". In legal terms, all acts effected by collusion are considered void. Definition In the study of economics and market competition, collusion takes place within an industry when rival companies cooperate for their mutual benefit. Conspiracy usually involves an agreement between two or more sellers to take action to suppress competition between sellers in the market. Because competition among sellers can provide consumers with low prices, conspiracy agreements increase the price consumers pay for the goods. Because of this harm to consumers, it is against antitrust laws to fix prices by agreement between producers, so participants must keep it a secret. Collusion often takes place within an oligopoly market structure, where there are few firms and agreements that have significant impacts on the entire market or industry. To differentiate from a cartel, collusive agreements between parties may not be explicit; however, the implications of cartels and collusion are the same. Under competition law, there is an important distinction between direct and covert collusion. Direct collusion generally refers to a group of companies communicating directly with each other to coordinate and monitor their actions, such as cooperating through pricing, market allocation, sales quotas, etc. On the other hand, tacit collusion is where companies coordinate and monitor their behavior without direct communication. This type of collusion is generally not considered illegal, so companies guilty of tacit conspiracy should face no penalties even though their actions would have a similar economic impact as explicit conspiracy. Collusion results from less competition through mutual understanding, where competitors can independently set prices and market share. A core principle of antitrust policy is that companies must not communicate with each other. Even if conversations between multiple companies are illegal but not enforceable, the incentives to comply with collusive agreements are the same with and without communication. It is against competition law for companies to have explicit conversations in private. If evidence of conversations is accidentally left behind, it will become the most critical and conclusive evidence in antitrust litigation. Even without communication, businesses can coordinate prices by observation, but from a legal standpoint, this tacit handling leaves no evidence. Most companies cooperate through invisible collusion, so whether companies communicate is at the core of antitrust policy. Collusion is illegal in the United States, Canada, Australia and most of the EU due to antitrust laws, but implicit collusion in the form of price leadership and tacit understandings still takes place. Tacit Collusion Covert collusion is known as tacit collusion and is considered legal. 
Adam Smith in The Wealth of Nations explains that since the masters (business owners) are fewer in number, it is easier for them to collude to serve common interests, such as maintaining low wages, whilst it is difficult for labourers to coordinate to protect their interests due to their vast numbers. Hence, business owners have a bigger advantage over the working class. Nevertheless, according to Adam Smith, the public rarely hears about the coordination and collaboration that occur between business owners, as it takes place in informal settings. Some forms of explicit collusion are not considered impactful enough on an individual basis to be considered illegal, such as the coordination by the social media group WallStreetBets in the GameStop short squeeze. There are many ways that implicit collusion tends to develop: The practice of stock analyst conference calls and meetings of industry participants almost necessarily results in tremendous amounts of strategic and price transparency. This allows each firm to see how and why every other firm is pricing their products. If the practice of the industry causes more complicated pricing, which is hard for the consumer to understand (such as risk-based pricing, hidden taxes and fees in the wireless industry, negotiable pricing), this can cause competition based on price to be meaningless (because it would be too complicated to explain to the customer in a short advertisement). This causes industries to have essentially the same prices and compete on advertising and image, something theoretically as damaging to consumers as normal price fixing. Base model of (Price) Collusion For a cartel to work successfully, it must: Co-ordinate on the conspiracy agreement (bargaining, explicit or implicit communication). Monitor compliance. Punish non-compliance. Control the expansion of non-cartel supply. Avoid inspection by customers and competition authorities. Regarding stability within the cartel: Collusion on high prices means that members have an incentive to deviate. In a one-off situation, high prices are not sustainable; sustaining them requires long-term vision and repeated interaction. Companies need to choose between two approaches: Insist on collusion agreements (now) and promote cooperation (future). Turn away from the alliance (now) and face punishment (future). Two factors influence this choice: (1) deviations must be detectable, and (2) penalties for deviations must have a significant effect. Because collusion is illegal, contracts establishing collusion are not protected by law and cannot be enforced by courts, so cartels must rely on other forms of punishment. Variations Suppose this market has N firms. At the collusive price, the firms are symmetric, so they divide the industry profit π^M equally, each earning π^M/N per period. A firm that deviates by undercutting the collusive price captures (approximately) the whole industry profit π^M for one period, after which the cartel reverts to competitive pricing and profits fall to zero. Collusion is sustainable if and only if the discounted value of colluding is at least as large as the value of deviating, (π^M/N)/(1 − δ) ≥ π^M, where δ is the discount factor; that is, δ ≥ 1 − 1/N, so that companies have no incentive to deviate unilaterally. So as the number of firms increases, it becomes more difficult for the cartel to maintain stability: as the number of firms in the market increases, so does the minimum discount factor required for collusion to succeed. According to neoclassical price-determination theory and game theory, the independence of suppliers forces prices to their minimum, increasing efficiency and decreasing the price-determining ability of each firm.
However, if all firms collude to increase prices, the loss of sales will be minimized, as consumers lack choices at lower prices and must choose among what is available. This benefits the colluding firms, as they generate more sales at the cost of efficiency to society. However, depending on the assumptions made in the theoretical model about the information available to all firms, there are some outcomes, based on cooperative game theory, where collusion may have higher efficiency than if firms did not collude. One variation of this traditional theory is the theory of kinked demand. Firms face a kinked demand curve if, when one firm decreases its price, other firms are expected to follow suit to maintain sales. When one firm increases its price, its rivals are unlikely to follow, as they would lose the sales gains they would otherwise receive by holding prices at the previous level. Kinked demand potentially fosters supra-competitive prices because any one firm would receive a reduced benefit from cutting price, as opposed to the benefits accruing under neoclassical theory and certain game-theoretic models such as Bertrand competition. Collusion may also occur in auction markets, where independent firms coordinate their bids (bid rigging). Deviation For collusion to be worthwhile, actions that generate returns in the future must matter to every company: the probability of continued interaction and the company's discount factor must be high enough. The sustainability of cooperation between companies also depends on the threat of punishment, which is also a matter of credibility. Firms that deviate from cooperative pricing are punished through multimarket contact (MMC) in each market. MMC increases the loss from deviation, and the incremental loss matters more than the incremental gain when the firm's objective function is concave. The purpose of MMC is therefore to strengthen corporate compliance and deter deviation from collusion. The principle of collusion is that firms give up deviation gains in the short term in exchange for continued collusion in the future. Collusion occurs when companies place more emphasis on future profits. Collusion is easier to sustain when the extra profit to be gained by deviating is lower and the punishment is greater. Collusion can be sustained when future collusive profits minus future punishment profits are at least as large as current deviation profits minus current collusive profits. Scholars in economics and management have tried to identify factors explaining why some firms are more or less likely to be involved in collusion. Some have noted the role of the regulatory environment and the existence of leniency programs.
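The stability condition sketched in the base model above can be illustrated with a short, hypothetical Python example using the standard grim-trigger logic: each firm earns an equal share of the collusive profit while the cartel holds, a deviating firm captures roughly the whole industry profit for one period, and profits are assumed to fall to zero thereafter. The function and parameter names are illustrative, not taken from the article.

```python
def collusion_sustainable(n_firms: int, delta: float, monopoly_profit: float = 1.0) -> bool:
    """Grim-trigger check: collude forever vs. deviate once and earn nothing afterwards.

    Present value of colluding: (pi_M / n) / (1 - delta)
    Present value of deviating: pi_M for one period, then punishment profits of 0.
    """
    collude_value = (monopoly_profit / n_firms) / (1 - delta)
    deviate_value = monopoly_profit
    return collude_value >= deviate_value  # equivalent to delta >= 1 - 1/n

# The minimum discount factor rises with the number of firms:
for n in (2, 3, 5, 10):
    min_delta = 1 - 1 / n
    print(n, round(min_delta, 3), collusion_sustainable(n, delta=0.8))
```

With a discount factor of 0.8, the check succeeds for 2, 3 and 5 firms but fails for 10, matching the statement that more firms require a higher minimum discount factor.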
Indicators Some actions that may indicate collusion among competitors are: Charging uniform prices or setting prices that are either too high or too low without justification Paying or receiving kickbacks and agreeing to refer customers only to each other Dividing territories and horizontal territorial allocation of markets among themselves Tying agreements and anticompetitive product bundling (although not all product bundling is anticompetitive) Refusal to deal with certain customers or suppliers and exclusive dealing with certain customers or suppliers Selling products below cost in order to drive out competitors (also known as dumping) Restricting the distribution or supply of products along the supply chain through vertical restraints Bid rigging by fixing bids or agreeing not to bid for certain contracts Examples In the standard textbook example, the competitive industry outcome is a price Pc and quantity Q; if firms collude, they can limit production to Q2 and raise the price to P2. Collusion usually involves some form of agreement to seek a higher price. When companies discriminate, price collusion is less likely, so the discount factor needed to ensure stability must be increased. In such price competition, competitors use delivered pricing to discriminate over space, but this does not mean that firms using delivered pricing cannot collude. United States Market division and price-fixing among manufacturers of heavy electrical equipment in the 1960s, including General Electric. An attempt by Major League Baseball owners to restrict players' salaries in the mid-1980s. The sharing of potential contract terms by NBA free agents in an effort to help a targeted franchise circumvent the salary cap. Price fixing among food manufacturers providing cafeteria food to schools and the military in 1993. Market division and output determination of the livestock feed additive lysine by companies in the US, Japan and South Korea in 1996, Archer Daniels Midland being the most notable of these. Chip dumping in poker or any other card game played for money. Product collusion between Ben and Jerry's and Häagen-Dazs in 2013: Ben and Jerry's makes chunkier flavors with more treats in them, while Häagen-Dazs sticks to smoother flavors. A collusion case involving Google and Apple over employee poaching, in which it was revealed in 2015 that both companies had agreed not to hire employees from one another in order to halt the rise in wages. Google has been hit with a series of antitrust lawsuits. In October 2020, the US Department of Justice filed a landmark lawsuit alleging that Google unlawfully boxed out competitors by reaching deals with phone makers, including Apple and Samsung, to be the default search engine on their devices. Another lawsuit, filed by nearly 40 attorneys general on Dec. 17, 2020, alleges that Google's search results favored its own services over those of more-specialized rivals, a tactic that harmed competitors. Europe The illegal collusion, discovered by the European Commission in 2019, between the giant German automakers BMW, Daimler and Volkswagen to hinder technological progress in improving the quality of vehicle emissions, in order to reduce production costs and maximize profits. Australia Japanese shipping company Kawasaki Kisen Kaisha Ltd (K-Line) was fined $34.5 million by the Federal Court for engaging in criminal cartel conduct.
The court found that K-Line participated in a cartel with other shipping companies to fix prices on the transportation of cars, trucks, and buses to Australia between 2009 and 2012. K-Line pleaded guilty in April 2018 and the fine was the largest ever imposed under the Competition and Consumer Act. The court noted that the penalty should serve as a strong warning to businesses that cartel conduct will not be tolerated and will result in serious consequences. Between 2004 and 2013, Dr Esra Ogru, the former CEO of an Australian biotech company called Phosphagenics, colluded with two colleagues by using false invoicing and credit card reimbursements to defraud her employer of more than $6.1 million. Barriers There can be significant barriers to collusion. In any given industry, these may include: The number of firms: As the number of firms in an industry increases, it is more difficult to successfully organize, collude and communicate. Cost and demand differences between firms: If costs vary significantly between firms, it may be impossible to establish a price at which to fix output. Firms generally prefer to produce at the level where marginal cost equals marginal revenue; if one firm can produce at a lower cost, it will prefer to produce more units and would expect a larger share of profits than its partner in the agreement. Asymmetry of information: Colluding firms may not have all the correct information about all other firms, from a quantitative perspective (firms may not know all other firms' cost and demand conditions) or a qualitative perspective (moral hazard). In either situation, firms may not know each other's preferences or actions, and any discrepancy would incentivize at least one actor to renege. Cheating: There is considerable incentive to cheat on collusion agreements; although lowering prices might trigger price wars, in the short term the defecting firm may gain considerably. This phenomenon is frequently referred to as "chiseling". Potential entry: New firms may enter the industry, establishing a new baseline price and eliminating collusion (though anti-dumping laws and tariffs can prevent foreign companies from entering the market). Economic recession: An increase in average total cost or a decrease in revenue provides the incentive to compete with rival firms in order to secure a larger market share and increased demand. Anti-collusion legal framework and collusion lawsuits: Many countries with anti-collusion laws outlaw side-payments, which indicate collusion because firms pay each other to keep the collusive relationship going. Such countries may see less collusion, as firms will likely prefer situations where profits accrue to themselves rather than to the combined venture. Leniency Programs: Leniency programs are policies that reduce sanctions against collusion if a participant voluntarily confesses their behavior or cooperates with the public authority's investigation. One example of a leniency program is offering immunity to the first firm that comes clean and gives the government information about collusion. These programs are designed to destabilize collusion and increase deterrence by encouraging firms to report illegal behavior. Conditions Conducive to Collusion There are several industry traits that are thought to be conducive to collusion or empirically associated with collusion.
These traits include: High market concentration: High market concentration refers to a market with few firms, which makes it easier for these firms to collude and coordinate their actions. Homogeneous products: Homogeneous products are products that are similar in nature, which makes it easier for firms to agree on prices and reduces the incentive for firms to compete on product differentiation. Stable demand and/or excess capacity: Stable demand and capacity imply predictability, meaning that demand and capacity do not fluctuate significantly, which makes it easier for firms to coordinate their actions and maintain a collusive agreement. This can also refer to a situation where firms have more production capacity than is needed to meet demand. Government Intervention Collusion often occurs within an oligopoly market structure, which is a type of market failure. Therefore, natural market forces alone may be insufficient to prevent or deter collusion, and government intervention is often necessary. Various forms of government intervention can be taken to reduce collusion among firms and promote natural market competition: Fines for companies that collude and imprisonment for their executives, who can be held personally liable. Detection of collusion by screening markets for suspicious pricing activity and high profitability. Immunity (leniency) for the first company to confess and provide the government with information about the collusion. See also Conscious parallelism Corporate crime Competition law Further reading Chassang, Sylvain; Ortner, Juan (2023). "Regulating Collusion". Annual Review of Economics 15 (1) References General references Vives, X. (1999) Oligopoly pricing, MIT Press, Cambridge MA (readable; suitable for advanced undergraduates.) Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge MA (An organized introduction to industrial organization) Tirole, J. (1986), "Hierarchies and Bureaucracies", Journal of Law Economics and Organization, vol. 2, pp. 181–214. Tirole, J. (1992), "Collusion and the Theory of Organizations", Advances in Economic Theory: Proceedings of the Sixth World Congress of the Econometric Society, ed by J.-J. Laffont. Cambridge: Cambridge University Press, vol.2:151-206. Inline citations Anti-competitive practices Game theory Bidding strategy
Collusion
Mathematics
3,593
1,241,965
https://en.wikipedia.org/wiki/Drilling%20rig
A drilling rig is an integrated system that drills wells, such as oil or water wells, or holes for piling and other construction purposes, into the earth's subsurface. Drilling rigs can be massive structures housing equipment used to drill water wells, oil wells, or natural gas extraction wells, or they can be small enough to be moved manually by one person, in which case they are called augers. Drilling rigs can sample subsurface mineral deposits, test rock, soil and groundwater physical properties, and can also be used to install sub-surface fabrications, such as underground utilities, instrumentation, tunnels or wells. Drilling rigs can be mobile equipment mounted on trucks, tracks or trailers, or more permanent land- or marine-based structures (such as oil platforms, commonly called 'offshore oil rigs' even if they do not contain a drilling rig). The term "rig" therefore generally refers to the complex equipment that is used to penetrate the surface of the Earth's crust. Small to medium-sized drilling rigs are mobile, such as those used in mineral exploration drilling, blast-hole drilling, water wells and environmental investigations. Larger rigs are capable of drilling through thousands of metres of the Earth's crust, using large "mud pumps" to circulate drilling fluid (slurry) through the bit and up the casing annulus, for cooling and removing the "cuttings" while a well is drilled. Hoists in the rig's derrick can lift hundreds of tons of pipe. Other equipment can force acid or sand into reservoirs to facilitate extraction of the oil or natural gas, and in remote locations there can be permanent living accommodation and catering for crews (which may number more than a hundred). Marine rigs may operate thousands of miles from the supply base with infrequent crew rotation or cycle. History Until internal combustion engines were developed in the late 19th century, the main method for drilling rock was the muscle power of humans or animals. The technique of drilling for oil by percussion or rotary methods dates back to the Chinese Han dynasty around 100 BC, when percussion drilling was used to extract natural gas in Sichuan province. Although seemingly primitive, these early oil and gas drilling methods required several technical skills: the availability of heavy iron bits and long bamboo poles, the manufacture of long, sturdy cables woven from bamboo fiber, and the use of levers. Heavy iron bits were attached to long bamboo cables suspended from bamboo derricks and then repeatedly raised and dropped into a manually dug hole by two to six men jumping on a lever. Han dynasty oil wells made by percussion drilling were effective but reached only about 10 meters in depth, and 100 meters by the 10th century. By the 16th century, the Chinese were exploring and drilling oil wells more than deep. Chinese well drilling technology was introduced to Europe in 1828. A modernized variant of the ancient Chinese drilling technique was used by American businessman Edwin Drake to drill Pennsylvania's first oil well in 1859, using small steam engines rather than human muscle to power the drilling process. Cable tool drilling was developed in ancient China and was used for drilling brine wells. The salt domes also held natural gas, which some wells produced and which was used for evaporation of the brine. Drake learned of cable tool drilling from Chinese laborers in the U.S. The first primary product was kerosene for lamps and heaters.
Similar developments around Baku fed the European market. In the 1970s, outside of the oil and gas industry, roller bits using mud circulation were replaced by the first pneumatic reciprocating piston Reverse Circulation (RC) drills, and became essentially obsolete for most shallow drilling, and are now only used in certain situations where rocks preclude other methods. RC drilling proved much faster and more efficient, and continues to improve with better metallurgy, deriving harder, more durable bits, and compressors delivering higher air pressures at higher volumes, enabling deeper and faster penetration. Diamond drilling has remained essentially unchanged since its inception. Petroleum drilling industry Oil and natural gas drilling rigs are used not only to identify geologic reservoirs, but also used to create holes that allow the extraction of oil or natural gas from those reservoirs. Primarily in onshore oil and gas fields once a well has been drilled, the drilling rig will be moved off of the well and a service rig (a smaller rig) that is purpose-built for completions will be moved on to the well to get the well on line. This frees up the drilling rig to drill another hole and streamlines the operation as well as allowing for specialization of certain services, i.e. completions vs. drilling. Mining drilling industry Mining drilling rigs are used for two main purposes, exploration drilling which aims to identify the location and quality of a mineral, and production drilling, used in the production-cycle for mining. Drilling rigs used for rock blasting for surface mines vary in size dependent on the size of the hole desired, and is typically classified into smaller pre-split and larger production holes. Underground mining (hard rock) uses a variety of drill rigs dependent on the desired purpose, such as production, bolting, cabling, and tunnelling. Mobile drilling rigs In early oil exploration, drilling rigs were semi-permanent in nature and the derricks were often built on site and left in place after the completion of the well. In more recent times drilling rigs are expensive custom-built machines that can be moved from well to well. Some light duty drilling rigs are like a mobile crane and are more usually used to drill water wells. Larger land rigs must be broken apart into sections and loads to move to a new place, a process which can often take weeks. Small mobile drilling rigs are also used to drill or bore piles. Rigs can range from continuous flight auger (CFA) rigs to small air powered rigs used to drill holes in quarries, etc. These rigs use the same technology and equipment as the oil drilling rigs, just on a smaller scale. The drilling mechanisms outlined below differ mechanically in terms of the machinery used, but also in terms of the method by which drill cuttings are removed from the cutting face of the drill and returned to surface. Automated drill rig An automated drill rig (ADR) is an automated full-sized walking land-based drill rig that drills long lateral sections in horizontal wells for the oil and gas industry. ADRs are agile rigs that can move from pad to pad to new well sites faster than other full-sized drilling rigs. Each rig costs about $25 million. ADR is used extensively in the Athabasca oil sands. According to the "Oil Patch Daily News", "Each rig will generate 50,000 man-hours of work during the construction phase and upon completion, each operating rig will directly and indirectly employ more than 100 workers." 
Ensign, an international oilfield services contractor based in Calgary, Alberta, that makes ADRs, claims that, compared to conventional drilling rigs, they are safer to operate and have "enhanced controls intelligence", a "reduced environmental footprint, quick mobility and advanced communications between field and office." In June 2005, Ensign Rig No. 118, the first slant automated drilling rig (ADR) designed specifically for steam assisted gravity drainage (SAGD) applications, was mobilized by Deer Creek Energy Limited, a Calgary-based oilsands company. Auger drills An auger drill is a spiral-shaped tool. Its main function is to drill holes in the ground and in other materials or surfaces, such as ice or wood. The design of an auger depends on the kind of material it is meant to drill into, hence there are different types of auger drills. Auger drills come in varying sizes and can drill holes up to a depth of 95 feet below the ground. They are known to be quite versatile, saving time and energy during construction work or even personal projects. The auger is a helical steel screw with curved flights that rotates as it is pushed into the ground by a drill head. As the auger rotates, it brings excavated material to the surface, which helps keep the borehole open and prevents it from collapsing. Augers can be mounted on trucks or other machines and come in different lengths and diameters. Auger drilling is used in many fields, including construction, environmental studies, and geotechnical investigations. It can also be used for a variety of other purposes, such as: Installing auger piles for foundation engineering Drilling holes for industrial applications like telephone poles, solar posts, and deck posts Small home projects like gardening, building fences, and planting crops Ice fishing There are different auger drilling methods, including hand auger drilling and hollow stem auger drilling. Hand auger drilling is a cost-effective method that is often used in areas with shallow soil, but it can be time-consuming and labor-intensive. Hollow stem auger drilling uses a large, hollow auger that removes soil as it drills. Auger drilling is often quieter and less vibration-prone than other drilling methods, like drive drilling, so it can also be used in urban areas. When using an auger, it is important to take safety precautions, such as wearing protective equipment like gloves, eye and ear protectors, and closed-toe boots. Operators should also make sure the auger and attachments are secure, engage the drill's high-torque gear, and start drilling slowly. Drill buckets A drill bucket, or auger bucket, is a drilling head that accumulates spoil inside and can be lifted from the hole periodically to be emptied. This method is particularly effective for drilling through hard and compacted soils, as well as rock, due to the bucket's cylindrical design with cutting teeth at the base, which excavates and retains soil or rock as it rotates. Drill buckets are commonly used in foundation drilling for constructing deep piles and shafts. They come in various sizes and configurations, tailored to specific ground conditions and project requirements, and can be equipped with wear-resistant components to enhance durability in abrasive environments. Additionally, modern drill buckets may include a vented bottom to release trapped air and facilitate faster spoil removal.
See also References External links OSHA guide for drilling rigs American inventions Articles containing video clips Chinese inventions Petroleum engineering Petroleum geology Oilfield terminology Hole making Machining
Drilling rig
Chemistry,Engineering
2,123
10,647,062
https://en.wikipedia.org/wiki/Standard%20Terminal%20Automation%20Replacement%20System
The Standard Terminal Automation Replacement System (STARS) is an air traffic control automation system manufactured by Raytheon and is currently being used in many TRACONs around the United States by the FAA. STARS replaced the Automated Radar Terminal System (ARTS) at FAA air traffic control facilities across the US, as well as the previous automation systems employed by the DoD. The STARS system receives and processes target reports, weather, and other non-target messages from both terminal and en route digital sensors. Additionally, it automatically tracks primary and secondary surveillance targets and provides aircraft position information to the enhanced traffic management system (ETMS). Finally, it also detects unsafe proximities between tracked aircraft pairs and provides a warning if tracked aircraft are detected at a dangerously low altitude. Additional features include converging runway display aid (CRDA) which displays "ghost" targets as an aid to controllers attempting to tightly space aircraft to converging/crossing runways in the terminal environment. Features The system is currently being used at all TRACON sites throughout the US and USAF RAPCON, USN RATCF and USA ARAC terminal facilities. STARS was installed as part of the FAA's TAMR project to replace the aging/obsolete ARTS hardware and software at TRACONS. TAMR Segment 3 Phase 1 replaced the 11 largest TRACONS CARTS with STARS. The smaller ARTS IIA sites transitioned to the STARS ELITE (Enhanced Local Integrated Tower Equipment) version of software and hardware, which is similar to TAMR, but with minimum redundancy. The FAA plans to complete this process in 2019. References External links FAA STARS Raytheon STARS - DoD Raytheon STARS - FAA Raytheon STARS LITE Raytheon STARS LITE (Brochure) Avionics Air traffic control
Standard Terminal Automation Replacement System
Technology
362
62,170,594
https://en.wikipedia.org/wiki/Jenny%20Pickerill
Jenny Pickerill (born 23 November 1973) is a Professor of Environmental Geography and Head of Department at the University of Sheffield. Her work considers how people value and use the environment, the impact of social justice on environmental policy, and ways of changing social practice. Early life and education Pickerill studied geography at Newcastle University. She moved to Scotland for her graduate studies, where she specialised in geographic information systems at the University of Edinburgh. She returned to Newcastle for her doctoral degree, where she earned her PhD in geography in 2000. During her PhD, Pickerill worked briefly at Lancaster University, where she worked on a project with Bronislaw Szerszynski. Research and career Pickerill started her independent research career at Curtin University in Perth. Here she studied the internet activism of Australian environmentalists. Pickerill was made a lecturer in human geography at the University of Leicester in 2003. She spent 2008 as a visiting fellow at the Oxford Internet Institute. She moved to the University of Sheffield in 2014. Pickerill works on environmental geography, in particular how people use and value the environment. This aspect of her work has involved the use of social science, investigating the complicated relationships between humans and the environment. Pickerill has explored grassroots initiatives that tackle environmental challenges. She has studied how environmental activists share their understanding of the environment using technology and how they frame their message. She is also interested in environmental activists who choose to protect one aspect of the environment whilst ignoring another. Her work recognises that environmental issues often overlap with other aspects of inequality, including racism, colonialism and neo-liberalism. Activist movements often incorporate populations from a range of social categories, and Pickerill has looked at this in the Occupy movement, the anti-war movement and the environmental movement in Australia. Pickerill has studied the impact of experimental solutions on environmental challenges and the role of students in redesigning their future. This has included ways to self-build safe, environmentally friendly housing. She has revealed that women are not well represented in eco-building communities. She is currently investigating the potential for eco-communities in environmentally friendly, sustainable cities. Selected publications Alongside her academic publications, Pickerill has written for The Conversation. References 1973 births Living people Environmental scientists Alumni of Newcastle University Alumni of the University of Edinburgh Academics of the University of Sheffield Academic staff of Curtin University Academics of the University of Leicester
Jenny Pickerill
Environmental_science
486
23,281,951
https://en.wikipedia.org/wiki/Camera%20auto-calibration
Camera auto-calibration is the process of determining internal camera parameters directly from multiple uncalibrated images of unstructured scenes. In contrast to classic camera calibration, auto-calibration does not require any special calibration objects in the scene. In the visual effects industry, camera auto-calibration is often part of the "Match Moving" process, where a synthetic camera trajectory and intrinsic projection model are solved to reproject synthetic content into video. Camera auto-calibration is a form of sensor ego-structure discovery; the subjective effects of the sensor are separated from the objective effects of the environment, leading to a reconstruction of the perceived world without the bias applied by the measurement device. This is achieved via the fundamental assumption that images are projected from a Euclidean space through a linear, 5-degree-of-freedom (in the simplest case) pinhole camera model with non-linear optical distortion. The linear pinhole parameters are the focal length, the aspect ratio, the skew, and the 2D principal point. With only a set of uncalibrated (or calibrated) images, a scene may be reconstructed up to a six-degree-of-freedom Euclidean transform and an isotropic scaling. A mathematical theory for general multi-view camera self-calibration was originally demonstrated in 1992 by Olivier Faugeras, QT Luong, and Stephen J. Maybank. In 3D scenes and general motions, each pair of views provides two constraints on the 5-degree-of-freedom calibration. Therefore, three views are the minimum needed for full calibration with fixed intrinsic parameters between views. Quality modern imaging sensors and optics may also provide further prior constraints on the calibration, such as zero skew (orthogonal pixel grid) and unity aspect ratio (square pixels). Integrating these priors will reduce the minimal number of images needed to two. It is possible to auto-calibrate a sensor from a single image given supporting information in a structured scene. For example, calibration may be obtained if multiple sets of parallel lines or objects with a known shape (e.g. circular) are identified. Problem statement Given a set of cameras P_i and 3D points X_j reconstructed up to a projective ambiguity (using, for example, the bundle adjustment method), we wish to define a rectifying homography H such that {P_i H, H^-1 X_j} is a metric reconstruction. The internal camera parameters can then be easily calculated from the factorization of each camera matrix, P_i H = K_i [R_i | t_i]. Solution domains Motions General Motion Purely Rotating Cameras Planar Motion Degenerate Motions Scene Geometry General Scenes with Depth Relief Planar Scenes Weak Perspective and Orthographic Imagers Calibration Priors for Real Sensors Nonlinear optical distortion Algorithms Using the Kruppa equations. Historically the first auto-calibration algorithms, based on the correspondence of epipolar lines tangent to the absolute conic on the plane at infinity. Using the absolute dual quadric and its projection, the dual image of the absolute conic. The modulus constraint. References Geometry in computer vision Stereophotogrammetry
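The camera matrix factorization mentioned in the problem statement can be illustrated with a short numpy/scipy sketch based on RQ decomposition; it assumes a finite camera matrix taken from an already rectified (metric) reconstruction, and the function name and example values are illustrative rather than part of any particular library.

```python
import numpy as np
from scipy.linalg import rq

def decompose_camera(P):
    """Factor a 3x4 camera matrix P ~ K [R | t] into intrinsics K, rotation R
    and translation t. Illustrative sketch only; assumes a finite camera from
    an already-rectified (metric) reconstruction."""
    P = P / np.linalg.norm(P[2, :3])      # remove the arbitrary projective scale
    M = P[:, :3]
    K, R = rq(M)                          # RQ decomposition: M = K @ R
    S = np.diag(np.sign(np.diag(K)))      # sign matrix, S @ S = I
    K, R = K @ S, S @ R                   # force a positive diagonal on K
    if np.linalg.det(R) < 0:              # make R a proper rotation; -P is the
        R, P = -R, -P                     # same homogeneous camera as P
    t = np.linalg.solve(K, P[:, 3])       # K^-1 times the fourth column gives t
    return K, R, t

# Example with an assumed synthetic camera (all numbers illustrative):
K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
K_est, R_est, t_est = decompose_camera(K_true @ Rt)   # recovers K_true, I and t
```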
Camera auto-calibration
Mathematics
610
32,250,099
https://en.wikipedia.org/wiki/Nanoscale%20Research%20Letters
Nanoscale Research Letters is a peer-reviewed open access scientific journal covering research in all areas of nanotechnology and published by Springer Science+Business Media. External links Springer Science+Business Media academic journals Monthly journals Academic journals established in 2006 Nanotechnology journals English-language journals Open access journals
Nanoscale Research Letters
Materials_science
61
25,814,833
https://en.wikipedia.org/wiki/Projective%20superspace
In supersymmetry, a theory of particle physics, projective superspace is one way of dealing with supersymmetric theories, i.e. with 8 real SUSY generators, in a manifestly covariant manner. See also Superspace Harmonic superspace References Supersymmetry
Projective superspace
Physics
62
1,621,854
https://en.wikipedia.org/wiki/Outflow%20boundary
An outflow boundary, also known as a gust front, is a storm-scale or mesoscale boundary separating thunderstorm-cooled air (outflow) from the surrounding air; similar in effect to a cold front, with passage marked by a wind shift and usually a drop in temperature and a related pressure jump. Outflow boundaries can persist for 24 hours or more after the thunderstorms that generated them dissipate, and can travel hundreds of kilometers from their area of origin. New thunderstorms often develop along outflow boundaries, especially near the point of intersection with another boundary (cold front, dry line, another outflow boundary, etc.). Outflow boundaries can be seen either as fine lines on weather radar imagery or else as arcs of low clouds on weather satellite imagery. From the ground, outflow boundaries can be co-located with the appearance of roll clouds and shelf clouds. Outflow boundaries create low-level wind shear which can be hazardous during aircraft takeoffs and landings. If a thunderstorm runs into an outflow boundary, the low-level wind shear from the boundary can cause thunderstorms to exhibit rotation at the base of the storm, at times causing tornadic activity. Strong versions of these features known as downbursts can be generated in environments of vertical wind shear and mid-level dry air. Microbursts have a diameter of influence less than , while macrobursts occur over a diameter greater than . Wet microbursts occur in atmospheres where the low levels are saturated, while dry microbursts occur in drier atmospheres from high-based thunderstorms. When an outflow boundary moves into a more stable low level environment, such as into a region of cooler air or over regions of cooler water temperatures out at sea, it can lead to the development of an undular bore. Definition An outflow boundary, also known as a gust front or arc cloud, is the leading edge of gusty, cooler surface winds from thunderstorm downdrafts; sometimes associated with a shelf cloud or roll cloud. A pressure jump is associated with its passage. Outflow boundaries can persist for over 24 hours and travel hundreds of kilometers (miles) from their area of origin. A wrapping gust front is a front that wraps around the mesocyclone, cutting off the inflow of warm moist air and resulting in occlusion. This is sometimes the case during the event of a collapsing storm, in which the wind literally "rips it apart". Origin A microburst is a very localized column of sinking air known as a downburst, producing damaging divergent and straight-line winds at the surface that are similar to but distinguishable from tornadoes which generally have convergent damage. The term was defined as affecting an area in diameter or less, distinguishing them as a type of downburst and apart from common wind shear which can encompass greater areas. They are normally associated with individual thunderstorms. Microburst soundings show the presence of mid-level dry air, which enhances evaporative cooling. Organized areas of thunderstorm activity reinforce pre-existing frontal zones, and can outrun cold fronts. This outrunning occurs within the westerlies in a pattern where the upper-level jet splits into two streams. The resultant mesoscale convective system (MCS) forms at the point of the upper level split in the wind pattern in the area of best low level inflow. The convection then moves east and toward the equator into the warm sector, parallel to low-level thickness lines. 
When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise which is normally just ahead of its radar signature. This feature is commonly depicted in the warm season across the United States on surface analyses, as they lie within sharp surface troughs. A macroburst, normally associated with squall lines, is a strong downburst larger than . A wet microburst consists of precipitation and an atmosphere saturated in the low-levels. A dry microburst emanates from high-based thunderstorms with virga falling from their base. All types are formed by precipitation-cooled air rushing to the surface. Downbursts can occur over large areas. In the extreme case, a derecho can cover a huge area more than wide and over long, lasting up to 12 hours or more, and is associated with some of the most intense straight-line winds, but the generative process is somewhat different from that of most downbursts. Appearance At ground level, shelf clouds and roll clouds can be seen at the leading edge of outflow boundaries. Through satellite imagery, an arc cloud is visible as an arc of low clouds spreading out from a thunderstorm. If the skies are cloudy behind the arc, or if the arc is moving quickly, high wind gusts are likely behind the gust front. Sometimes a gust front can be seen on weather radar, showing as a thin arc or line of weak radar echos pushing out from a collapsing storm. The thin line of weak radar echoes is known as a fine line. Occasionally, winds caused by the gust front are so high in velocity that they also show up on radar. This cool outdraft can then energize other storms which it hits by assisting in updrafts. Gust fronts colliding from two storms can even create new storms. Usually, however, no rain accompanies the shifting winds. An expansion of the rain shaft near ground level, in the general shape of a human foot, is a telltale sign of a downburst. Gustnadoes, short-lived vertical circulations near ground level, can be spawned by outflow boundaries. Effects Gust fronts create low-level wind shear which can be hazardous to planes when they takeoff or land. Flying insects are swept along by the prevailing winds. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. At the surface, clouds of dust can be raised by outflow boundaries. If squall lines form over arid regions, a duststorm known as a haboob can result from the high winds picking up dust in their wake from the desert floor. If outflow boundaries move into areas of the atmosphere which are stable in the low levels, such through the cold sector of extratropical cyclones or a nocturnal boundary layer, they can create a phenomenon known as an undular bore, which shows up on satellite and radar imagery as a series of transverse waves in the cloud field oriented perpendicular to the low-level winds. See also Density Derecho Gustnado Haboob Heat burst Inflow (meteorology) Lake-effect snow Mathematical singularity Sea breeze Tropical cyclogenesis Wake low Weather front Pseudo-cold front References External links Outflow boundary over south Florida MPEG, 854KB Atmospheric dynamics Wind
Outflow boundary
Chemistry
1,432
20,091,428
https://en.wikipedia.org/wiki/NJIT%20Steel%20Bridge%20Team
NJIT Steel Bridge Team is a team within the New Jersey Institute of Technology's ASCE chapter. It consists of undergraduate students attending NJIT who are majoring in civil engineering and are also members of ASCE. Every year, the team competes against other schools in a steel bridge competition. Each year the team holds a few fundraising events, which are crucial because the team needs funding to order parts and fabricate them. Besides fundraising, corporate sponsors are also important. Team members often get together for outside activities such as hiking, paintball and go-karting. The team also carries out outreach activities such as visiting high schools to talk about civil engineering and the steel bridge competition. At least one meeting is held every week to discuss the team's progress and keep members informed. Meetings are always held during common hours, and sometimes pizza and drinks are served. The competition The objective of the competition is to design a bridge that is light yet strong and economical, and to assemble it quickly with as few team members as possible. The competition has three phases: design and testing, which students do themselves using software, knowledge learned in classes, and sometimes help from professors and alumni; fabrication, when students grind, weld, and fit the parts together; and finally assembly, when students put the parts together to build the designed bridge. There are six scoring categories: display (how the bridge looks), construction speed (time management), construction economy (low cost to build), lightness, stiffness (aggregate deflection), and structural efficiency (calculated from a formula based on weight and deflection). The competition occurs every year and is hosted at a different place each time. In 2024, the regional competition will be hosted at Stony Brook University, Long Island, on Saturday, April 13, 2024, and the national competition will be hosted at Louisiana Tech in Ruston, Louisiana.
If there are 1–4 teams competing in the region, the best team will proceed to the national competition If there are 5–10 teams competing in the region, the 2 best teams will proceed to the national competition If there are more than 10 teams competing in the region, the 3 best teams will proceed to the national competition The prize for winning the national competition is $2,500 Awards 2012 National Competition at Clemson, SC 15th Place Overall - Highest in History 9th Place in Construction Speed 2012 Regional Competition 1st Place Overall 1st Place Economy 1st Place Stiffness 1st Place Build Time 2010 National Competition (at Purdue University) Nineteenth Place Overall 2010 Regional Competition (Regional competition score-sheet) First Place Overall (first place in all categories, including Construction Speed, Lightness, Stiffness, Construction Economy, Structural Efficiency and Display) 2009 National Competition (at University of Nevada, Las Vegas) Twenty-Fourth Place Overall 2009 Regional Competition (Regional competition score-sheet) First Place Overall First Place Efficiency First Place Stiffness 2008 National Competition (at University of Florida) Nineteenth Place Overall ($3.390 Million) Twenty-fourth Place Efficiency ($1.327 Million) Sixteenth Place Economy ($2.062 Million) Twenty-sixth Place Stiffness (0.75" Aggregate Deflection) Twenty-second Place Aesthetics Twenty-fifth Place Lightness (267.6 lbs) Nineteenth Place Construction Speed (7.85 mins including Penalties) In a field of 42 Teams 2008 Regional Competition First Place Overall Second Place Efficiency First Place Economy Second Place Lightness First Place Construction Speed Second Place Stiffness 2007 National Competition 2007 Regional Competition First Place Overall First Place Efficiency First Place Economy First Place Lightness First Place Construction Speed 2006 National Competition 2006 Regional Competition 2005 National Competition 2005 Regional Competition · Second Place Overall · First Place Efficiency · First Place Aesthetics · First Place Stiffness 2004 National Competition 2004 Regional Competition 2003 National Competition 2003 Regional Competition · First Place Overall · First Place Lightness · First Place Efficiency · First Place Economy · First Place Stiffness · First Place Construction Speed 2002 National Competition 2002 Regional Competition · First Place Overall · First Place Efficiency · First Place Stiffness 2001 National Competition 2001 Regional Competition · Second Place Overall · First Place Efficiency · First Place Aesthetics Corporate Partners SCHIAVONE - Constructors & Engineers Acrow Bridges Corporate Sponsors Weiss-aug EECRUZ Hardesty & Hanover Lapatka Associates, Inc. Bloomfield Mason Supply Chalet Construction Corp S.Seltzer Construction Corp. Previous Sponsors Acrow Bridges EECRUZ Local No. 11 Parsons Brinckerhoff ASCE North Jersey Branch CIAP of NJ The Conti Group Mueser Rutledge Consulting Engineers Braemar Homes, L.L.C. Conklin Associates Greenberg Farrow Moretrench Northeast Remsco Construction Phoenix Site Management Tishman Kelly Engineering S.Seltzer Construction Corp. Suburban Consulting Engineers INC. External links 'NJIT Steel Bridge Team' 'ASCE/AISC Student Steel Bridge Competition 2011 Competition Guide' References Engineering organizations
NJIT Steel Bridge Team
Engineering
1,027
26,408,628
https://en.wikipedia.org/wiki/Dictionary%20writing%20system
A dictionary writing system (DWS), or dictionary production/publishing system (DPS) is software for writing and producing a dictionary, glossary, vocabulary, or thesaurus. It may include an editor, a database, a web interface for collaborative work, and various management tools. External links Third international workshop on Dictionary Writing Systems (DWS 2004) Fourth international workshop on Dictionary Writing Systems (DWS 2006) Resources Butler, Lynnika and Heather van Volkinburg. 2007. Fieldworks Language Explorer (FLEx). Language documentation & conservation 1:1. Corris, Miriam, Christopher Manning, Susan Poetsch, and Jane Simpson. 2002. Dictionaries and endangered languages. In David Bradley and Maya Bradley (eds.), Language endangerment and language maintenance. London: RoutledgeCurzon: 329-347. Coward, David E. and Charles E. Grimes. 1995. Making dictionaries: a guide to lexicography and the Multi-Dictionary Formatter (Version 1.0). Waxhaw: Summer Institute of Linguistics. De Schryver, G-M and Joffe, D. 2004. ‘On How Electronic Dictionaries are Really Used’ (see elsewhere in the current Proceedings) Hosken, Martin. 2006. Lexicon Interchange Format: A description. Joffe, David and Gilles-Maurice de Schryver. 2004. TshwaneLex – A state-of-the-art dictionary compilation program. In G. Williams & S. Vessier (eds.). 2004. Proceedings of the eleventh EURALEX international congress, EURALEX 2004, Lorient, France, July 6–10, 2004: 99–104. Lorient: Faculté des Lettres et des Sciences Humaines, Université de Bretagne Sud. McNamara, M. 2003. ‘Dictionaries for all: XML to Final Product’ in Online Proceedings of XML Europe 2003 Conference & Exposition. Powering the Information Society. Language software
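As a purely illustrative sketch of the kind of structured data such software manages (this is a generic model invented for the example, not the schema of any system cited in this article), a lexical entry with senses and examples might be represented and rendered like this:

```python
from dataclasses import dataclass, field

@dataclass
class Sense:
    definition: str
    examples: list[str] = field(default_factory=list)

@dataclass
class Entry:
    headword: str
    part_of_speech: str
    senses: list[Sense] = field(default_factory=list)

# A toy in-memory "database" keyed by headword, standing in for the DWS's storage layer.
lexicon: dict[str, Entry] = {}

def add_entry(entry: Entry) -> None:
    lexicon[entry.headword] = entry

def render(entry: Entry) -> str:
    """Produce a simple text rendering, standing in for the DWS's output stage."""
    lines = [f"{entry.headword} ({entry.part_of_speech})"]
    for i, sense in enumerate(entry.senses, start=1):
        lines.append(f"  {i}. {sense.definition}")
        for ex in sense.examples:
            lines.append(f"     e.g. {ex}")
    return "\n".join(lines)

add_entry(Entry("lexicon", "noun", [Sense("the vocabulary of a language", ["a rich lexicon"])]))
print(render(lexicon["lexicon"]))
```

A real system would add collaborative editing, versioning, and export to print or web formats on top of a model of roughly this shape.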
Dictionary writing system
Technology
412
8,561,045
https://en.wikipedia.org/wiki/Bucket-brigade%20device
A bucket brigade or bucket-brigade device (BBD) is a discrete-time analogue delay line, developed in 1969 by F. Sangster and K. Teer of the Philips Research Labs in the Netherlands. It consists of a series of capacitance sections C0 to Cn. The stored analogue signal is moved along the line of capacitors, one step at each clock cycle. The name comes from analogy with the term bucket brigade, used for a line of people passing buckets of water. In most signal processing applications, bucket brigades have been replaced by devices that use digital signal processing, manipulating samples in digital form. Bucket brigades still see use in specialty applications, such as guitar effects. A well-known integrated circuit device around 1976, the Reticon SAD-1024, implemented two 512-stage analog delay lines in a 16-pin DIP. It allowed clock frequencies ranging from 1.5 kHz to more than 1.5 MHz. The SAD-512 was a single delay line version. The Philips Semiconductors TDA1022 similarly offered a 512-stage delay line but with a clock rate range of 5–500 kHz. Other common BBD chips include the Panasonic MN3002, MN3005, MN3007, MN3204 and MN3205, with the primary differences being the available delay time. Some examples of effects units utilizing Panasonic BBDs are the Boss CE-1 Chorus Ensemble and the Yamaha E1010. In 2009, the guitar effects pedal manufacturer Visual Sound recommissioned production of the Panasonic-designed MN3102 and MN3207 BBD chips. Despite being analog in their representation of individual signal voltage samples, these devices are discrete in the time domain and thus are limited by the Nyquist–Shannon sampling theorem; both the input and output signals are generally low-pass filtered. The input must be low-pass filtered to avoid aliasing effects, while the output is low-pass filtered for reconstruction. (A low-pass is used as an approximation to the Whittaker–Shannon interpolation formula.) The concept of the bucket-brigade device led to the charge-coupled device (CCD) developed by Bell Labs for use in digital cameras. The idea of using capacitors to retain a voltage state has older origins and separately led to dynamic random-access memory, where the charges are not propagated, but refreshed, in place. See also Switched capacitor References Theuwissen, A. (1995). Solid-State Imaging with Charge-Coupled Devices. Analog circuits
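The delay mechanism described above lends itself to a very short simulation. The following Python sketch is not from any cited source; the one-transfer-per-clock-tick model and the example stage count are simplifying assumptions, and real BBDs additionally use two-phase clocking and need the anti-aliasing and reconstruction filters mentioned above. It simply shifts stored samples one stage per tick, so the input re-emerges n_stages ticks (n_stages / f_clock seconds) later:

```python
from collections import deque

def bucket_brigade(input_samples, n_stages):
    """Simplified bucket-brigade delay line.

    One analogue sample moves one stage per clock tick, so the output is the
    input delayed by n_stages ticks. Charge losses, two-phase clocking and
    filtering are all ignored in this toy model."""
    buckets = deque([0.0] * n_stages, maxlen=n_stages)
    output = []
    for sample in input_samples:
        output.append(buckets[0])   # the sample that has traversed all stages is read out
        buckets.append(sample)      # appending pushes every stored sample one stage along
    return output

signal = [0.0, 1.0, 0.5, -0.5, 0.0, 0.0, 0.0]
print(bucket_brigade(signal, n_stages=3))   # the pulse emerges three ticks later
```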
Bucket-brigade device
Engineering
535
8,788,855
https://en.wikipedia.org/wiki/Kharitonov%20region
A Kharitonov region is a concept in mathematics. It arises in the study of the stability of polynomials. Let be a simply-connected set in the complex plane and let be the polynomial family. is said to be a Kharitonov region if is a subset of Here, denotes the set of all vertex polynomials of complex interval polynomials and denotes the set of all vertex polynomials of real interval polynomials See also Kharitonov's theorem References Y C Soh and Y K Foo (1991), “Kharitonov Regions: It Suffices to Check a Subset of Vertex Polynomials”, IEEE Trans. on Aut. Cont., 36, 1102 – 1105. Polynomials Stability theory
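The definition above refers to the vertex polynomials of real interval polynomials. As a related illustration only (this is the classical Kharitonov construction for Hurwitz stability of a real interval polynomial, not the Kharitonov-region generalization the article defines, and the coefficient intervals below are invented for the example), the four vertex polynomials can be built from coefficient bounds and checked numerically:

```python
import numpy as np

def kharitonov_polynomials(lower, upper):
    """Given bounds lower[i] <= a_i <= upper[i] for p(s) = a_0 + a_1 s + ... + a_n s^n,
    return the four classical Kharitonov (vertex) polynomials as coefficient
    lists in ascending order. The coefficient patterns cycle with period four:
    low-low-high-high, high-high-low-low, low-high-high-low, high-low-low-high."""
    patterns = ["llhh", "hhll", "lhhl", "hllh"]
    polys = []
    for pat in patterns:
        coeffs = [lower[i] if pat[i % 4] == "l" else upper[i] for i in range(len(lower))]
        polys.append(coeffs)
    return polys

def hurwitz_stable(ascending_coeffs):
    """True if all roots lie strictly in the open left half-plane."""
    roots = np.roots(ascending_coeffs[::-1])   # np.roots expects descending order
    return bool(np.all(roots.real < 0))

# Example interval polynomial a_0 + a_1 s + a_2 s^2 + a_3 s^3 with small uncertainty.
lower = [0.9, 2.0, 3.0, 0.9]
upper = [1.1, 2.5, 3.5, 1.1]
print([hurwitz_stable(k) for k in kharitonov_polynomials(lower, upper)])  # [True, True, True, True]
```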
Kharitonov region
Mathematics
147
2,055,363
https://en.wikipedia.org/wiki/Master%20of%20Animals
The Master of Animals, Lord of Animals, or Mistress of the Animals is a motif in ancient art showing a human between and grasping two confronted animals. The motif is very widespread in the art of the Ancient Near East and Egypt. The figure may be female or male; it may be a column or a symbol; the animals may be realistic or fantastical; and the human figure may have animal elements such as horns, an animal upper body, an animal lower body, legs, or cloven feet. Although what the motif represented to the cultures that created the works probably varies greatly, unless shown with specific divine attributes, when male the figure is typically described as a hero by interpreters. The motif is so widespread and visually effective that many depictions probably were conceived as decoration with only a vague meaning attached to them. The Master of Animals is the "favorite motif of Achaemenian official seals", but the figures in these cases should be understood as the king. The human figure may be standing, as found from the fourth millennium BC, or kneeling on one knee, as found from the third millennium BC. They are usually shown looking frontally, but in Assyrian pieces typically they are shown from the side. Sometimes the animals are clearly alive, whether fairly passive and tamed, or still struggling, rampant, or attacking. In other pieces they may represent a hunter's dead prey. Other associated representations show a figure controlling or "taming" a single animal, usually to the right of the figure. But the many representations of heroes or kings killing an animal are distinguished from these. Art The earliest known depiction of such a motif appears on stamp seals of the Ubaid period in Mesopotamia. The motif appears on a terracotta stamp seal from Tell Telloh, ancient Girsu, at the end of the prehistoric Ubaid period of Mesopotamia. The motif also was given the topmost location of the famous Gebel el-Arak Knife in the Louvre, an ivory and flint knife dating from the Naqada IId period of Egyptian prehistory, which began c. 3450 BC. Here a figure in Mesopotamian dress, often interpreted to be a god, grapples with two lions. It has been connected to the famous Pashupati seal from the Indus Valley civilization (2500-1500 BC), showing a figure seated in a yoga-like posture, with a horned headdress (or horns), and surrounded by animals. This in turn is related to a figure on the Gundestrup cauldron, who sits with legs part-crossed, has antlers, is surrounded by animals, and grasps a snake in one hand and a torc in the other. This famous and puzzling object probably dates to 200 BC, or possibly as late as 300 AD, and although found in Denmark, it may have been made in Thrace. A form of the motif appears on a belt buckle of the Early Middle Ages from Kanton Wallis, Switzerland, which depicts the biblical figure of Daniel between two lions. The purse-lid from the Sutton Hoo burial of about 620 AD has two plaques with a human between two wolves, and the motif is common in Anglo-Saxon art and related Early Medieval styles, where the animals generally remain aggressive. Other notable examples of the motif in Germanic art include one of the Torslunda plates, and helmets from Vendel and Valsgärde. In the art of Mesopotamia the motif appears very early, usually with a "naked hero", for example at Uruk in the Uruk period (c. 4000 to 3100 BC), but was "outmoded in Mesopotamia by the seventh century BC". In Luristan bronzes the motif is extremely common, and often highly stylized. 
In terms of its composition this motif compares with another very common motif in the art of the ancient Near East and Mediterranean, that of two confronted animals flanking and grazing on a Tree of Life, interpreted as representing an earth deity. Deity figures Although such figures are not all, or even usually, deities, the term may be a generic name for a number of deities from a variety of cultures with close relationships to the animal kingdom or in part animal form (in cultures where that is not the norm). These figures control animals, usually wild ones, and are responsible for their continued reproduction and availability for hunters. They sometimes also have female equivalents, the so-called Mistress of the Animals. Many Mesopotamian examples may represent Enkidu, a central figure in the Ancient Mesopotamian Epic of Gilgamesh. They all may have a Stone Age precursor who was probably a hunter's deity. Many relate to the horned deity of the hunt, another common type, typified by Cernunnos, and a variety of stag, bull, ram, and goat deities. Horned deities are not universal however, and in some cultures bear deities, such as Arktos, might take the role, or even the more anthropomorphic deities who lead the Wild Hunt. Such figures are also often referred to as 'Lord of the forest' or 'Lord of the mountain'. The Greek god shown as "Master of Animals" is usually Apollo as a hunting deity. Shiva has the epithet Pashupati meaning the "Lord of animals", and these figures may derive from an archetype. Chapter 39 of the Book of Job has been interpreted as an assertion of the deity of the Hebrew Bible as Master of Animals. See also Asherah Notes References Aruz, Joan, et al., Assyria to Iberia at the Dawn of the Classical Age, 2014, Metropolitan Museum of Art, , 9780300208085, google books Frankfort, Henri, The Art and Architecture of the Ancient Orient, Pelican History of Art, 4th ed 1970, Penguin (now Yale History of Art), Garfinkel, Alan P., Donald R. Austin, David Earle, and Harold Williams, 2009, "Myth, Ritual and Rock Art: Coso Decorated Animal-Humans and the Animal Master". Rock Art Research 26(2):179-197. Section "The Animal Master", The Journal of the Australian Rock Art Research Association (AURA) and of the International Federation of Rock Art Organizations (IFRAO)] Werness, Hope B., Continuum Encyclopedia of Animal Symbolism in World Art, 2006, A&C Black, , 9780826419132, google books Further reading Hinks, Roger (1938). The Master of Animals, Journal of the Warburg Institute, Vol. 1, No. 4 (Apr., 1938), pp. 263–265 Chittenden, Jacqueline (1947). The Master of Animals, Hesperia, Vol. 16, No. 2 (Apr. - Jun., 1947), pp. 89–114 Slotten, Ralph L. (1965). The Master of Animals: A study in the symbolism of ultimacy in primitive religion, Journal of the American Academy of Religion, 1965, XXXIII(4): 293-302 Bernhard Lang (2002). The Hebrew God: Portrait of an Ancient Deity, New Haven: Yale University Press, pp. 75–108 Yamada, Hitoshi (2013). "The "Master of Animals" Concept of the Ainu", Cosmos: The Journal of the Traditional Cosmology Society, 29: 127–140 Garfinkel, Alan P. and Steve Waller, 2012, Sounds and Symbolism from the Netherworld: Acoustic Archaeology at the Animal Master’s Portal. 
Pacific Coast Archaeological Society Quarterly 46(4):37-60 External links Master of the Animals at Encyclopædia Britannica 4th-millennium BC establishments Mythological archetypes Nature gods Hunting gods Dionysus Iconography Prehistoric art Ancient Near East art and architecture Animals in art Visual motifs Ubaid period Wild Hunt
Master of Animals
Mathematics
1,613
3,262,889
https://en.wikipedia.org/wiki/Omega-regular%20language
The ω-regular languages are a class of ω-languages that generalize the definition of regular languages to infinite words. Formal definition An ω-language L is ω-regular if it has the form Aω where A is a regular language not containing the empty string AB, the concatenation of a regular language A and an ω-regular language B (Note that BA is not well-defined) A ∪ B where A and B are ω-regular languages (this rule can only be applied finitely many times) The elements of Aω are obtained by concatenating words from A infinitely many times. Note that if A is regular, Aω is not necessarily ω-regular, since A could be for example {ε}, the set containing only the empty string, in which case Aω=A, which is not an ω-language and therefore not an ω-regular language. It is a straightforward consequence of the definition that the ω-regular languages are precisely the ω-languages of the form A1B1ω ∪ ... ∪ AnBnω for some n, where the Ais and Bis are regular languages and the Bis do not contain the empty string. Equivalence to Büchi automaton Theorem: An ω-language is recognized by a Büchi automaton if and only if it is an ω-regular language. Proof: Every ω-regular language is recognized by a nondeterministic Büchi automaton; the translation is constructive. Using the closure properties of Büchi automata and structural induction over the definition of ω-regular language, it can be easily shown that a Büchi automaton can be constructed for any given ω-regular language. Conversely, for a given Büchi automaton A = (Q, Σ, Δ, Q0, F), we construct an ω-regular language and then we will show that this language is recognized by A. For an ω-word w = a1a2... let w(i,j) be the finite segment ai+1...aj-1aj of w. For every pair of states q, q' ∈ Q, we define a regular language Lq,q' that is accepted by the finite automaton (Q, Σ, Δ, {q}, {q'}). Lemma: We claim that the Büchi automaton A recognizes the language ⋃q∈Q0, q'∈F Lq,q'(Lq',q')ω. Proof: Let's suppose w ∈ L(A) and q0,q1,q2,... is an accepting run of A on w. Therefore, q0 is in Q0 and there must be a state q' in F such that q' occurs infinitely often in the accepting run. Let's pick the strictly increasing infinite sequence of indexes i0,i1,i2... such that, for all k≥0, qik is q'. Therefore, w(0,i0) ∈ Lq0,q' and, for all k≥0, w(ik,ik+1) ∈ Lq',q'. Therefore, w ∈ Lq0,q'(Lq',q')ω ⊆ ⋃q∈Q0, q'∈F Lq,q'(Lq',q')ω. Conversely, suppose w ∈ Lq,q'(Lq',q')ω for some q ∈ Q0 and q' ∈ F. Therefore, there is an infinite and strictly increasing sequence i0,i1,i2... such that w(0,i0) ∈ Lq,q' and, for all k≥0, w(ik,ik+1) ∈ Lq',q'. By definition of Lq,q', there is a finite run of A from q to q' on word w(0,i0). For all k≥0, there is a finite run of A from q' to q' on word w(ik,ik+1). By this construction, there is a run of A, which starts from q ∈ Q0 and in which q' ∈ F occurs infinitely often. Hence, w is accepted by A. Equivalence to Monadic second-order logic Büchi showed in 1962 that ω-regular languages are precisely the ones definable in a particular monadic second-order logic called S1S. Bibliography Wolfgang Thomas, "Automata on infinite objects." In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics, pages 133-192. Elsevier Science Publishers, Amsterdam, 1990. Formal languages
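The equivalence proof above argues about runs of a Büchi automaton on words of the form u·vω. As an illustration only (the dictionary-based automaton encoding and the example automaton are choices made here, not part of the article), the standard membership check for such ultimately periodic words can be sketched: u·vω is accepted exactly when, in the product of the automaton with the lasso shape of the word, some reachable state with an accepting automaton component lies on a cycle.

```python
def successors(delta, state, symbol):
    """All automaton states reachable in one step on `symbol`."""
    return delta.get((state, symbol), set())

def accepts_lasso(delta, initial, accepting, prefix, loop):
    """Check whether a nondeterministic Büchi automaton accepts prefix · loop^ω.

    Product states are (automaton state, position in prefix+loop); the word is
    accepted iff a reachable product state with an accepting automaton state
    can reach itself again (i.e. lies on a cycle)."""
    if not loop:
        raise ValueError("the loop part must be non-empty")
    word = prefix + loop
    n = len(prefix)

    def product_successors(q, pos):
        nxt = pos + 1
        if nxt == len(word):
            nxt = n                                   # wrap back to the start of the loop
        return {(q2, nxt) for q2 in successors(delta, q, word[pos])}

    # Collect all product states reachable from the initial states.
    start = {(q, 0) for q in initial}
    reachable, stack = set(start), list(start)
    while stack:
        q, pos = stack.pop()
        for s in product_successors(q, pos):
            if s not in reachable:
                reachable.add(s)
                stack.append(s)

    # Look for a reachable accepting product state that lies on a cycle.
    for q, pos in reachable:
        if q not in accepting:
            continue
        seen, stack = set(), list(product_successors(q, pos))
        while stack:
            s = stack.pop()
            if s == (q, pos):
                return True
            if s not in seen:
                seen.add(s)
                stack.extend(product_successors(*s))
    return False

# Example: "infinitely many a's" over the alphabet {a, b}; q1 is the accepting state.
delta = {
    ("q0", "a"): {"q1"}, ("q0", "b"): {"q0"},
    ("q1", "a"): {"q1"}, ("q1", "b"): {"q0"},
}
print(accepts_lasso(delta, {"q0"}, {"q1"}, "", "ab"))   # True: (ab)^ω contains infinitely many a's
print(accepts_lasso(delta, {"q0"}, {"q1"}, "a", "b"))   # False: a·b^ω contains only one a
```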
Omega-regular language
Mathematics
748
30,784,853
https://en.wikipedia.org/wiki/NanoIntegris
NanoIntegris is a nanotechnology company based in Boisbriand, Quebec specializing in the production of enriched, single-walled carbon nanotubes. In 2012, NanoIntegris was acquired by Raymor Industries, a large-scale producer of single-wall carbon nanotubes using the plasma torch process. The proprietary technology through which NanoIntegris creates its products spun out of the Hersam Research Group at Northwestern University. Process The process through which these technologies emerged is called Density Gradient Ultracentrifugation (DGU). DGU has been used for some time in biological and medical applications, but Dr. Mark Hersam applied the process to carbon nanotubes, which allowed nanotubes with semiconducting properties to be separated from those with conducting properties. While the DGU method was the first one to convincingly produce high-purity semiconducting carbon nanotubes, the rotation speeds involved limit the amount of liquid, and thus nanotubes, that can be processed with this technology. NanoIntegris has recently licensed a new process using selective wrapping of semiconducting nanotubes with conjugated polymers. This method is scalable, thus enabling the supply of this material in large quantities for commercial applications. Products Semiconducting SWCNT Enriched semiconducting carbon nanotubes (sc-SWCNT) produced using either a density-gradient ultracentrifugation (DGU) or a polymer-wrapping (conjugated polymer extraction, CPE) method. While the DGU method is used to disperse and enrich sc-SWCNT in an aqueous solution, the CPE method disperses and enriches sc-SWCNT in non-polar aromatic solvents. Conducting SWCNT Enriched conducting carbon nanotubes PlasmaTubes SWCNT Highly graphitized single-wall carbon nanotubes grown using an industrial-scale plasma torch. Nanotubes grown using a plasma torch display diameters, lengths, and purity levels comparable to the arc and laser methods. The nanotubes measure between 1 and 1.5 nm in diameter and between 0.3 and 5 microns in length. Pure and SuperPureTubes SWCNT Highly purified carbon nanotubes. Carbon impurities and metal catalyst impurities are below 3% and 1.5%, respectively. PureSheets/Graphene 1-4+ layer graphene sheets obtained by liquid exfoliation of graphite HiPco SWCNT Small-diameter single-walled carbon nanotubes Applications Field-Effect Transistors Both Wang and Engel have found that NanoIntegris-separated nanotubes "hold great potential for thin-film transistors and display applications" compared to standard carbon nanotubes. More recently, nanotube-based thin film transistors have been printed using inkjet or gravure methods on a variety of flexible substrates including polyimide and polyethylene terephthalate (PET) and transparent substrates such as glass. These p-type thin film transistors reliably exhibit high mobilities (> 10 cm^2/V/s), ON/OFF ratios (> 10^3), and threshold voltages below 5 V. Nanotube-enabled thin-film transistors thus offer high mobility and current density, low power consumption as well as environmental stability and especially mechanical flexibility. Hysteresis in the current-voltage curves as well as variability in the threshold voltage are issues that remain to be solved on the way to nanotube-enabled OTFT backplanes for flexible displays. Transparent Conductors Additionally, the ability to distinguish semiconducting from conducting nanotubes was found to have an effect on conductive films. 
Organic Light-Emitting Diodes Organic Light-Emitting Diodes (OLEDs) can be made on a larger scale and at a lower cost using separated carbon nanotubes. High Frequency Devices By using high-purity, semiconducting nanotubes, scientists have been able to achieve "record...operating frequencies above 80 GHz." References Boisbriand Companies based in Quebec Technology companies established in 2007 Nanotechnology companies 2007 establishments in Quebec Canadian companies disestablished in 2007 2012 mergers and acquisitions
NanoIntegris
Materials_science
875
76,152,941
https://en.wikipedia.org/wiki/Alfa%20Romeo%20690T%20engine
The Alfa Romeo 690T is a twin-turbocharged, direct-injected, 90° V6 petrol engine designed and produced by Alfa Romeo since 2015. It is used in the high-performance Giulia Quadrifoglio and Stelvio Quadrifoglio models and is manufactured at the Alfa Romeo Termoli engine plant. Description The 690T is often considered to be the Ferrari F154 engine with two fewer cylinders, but it is in fact a completely new engine developed by Gianluca Pivetti, the same engineer responsible for the F154; it shares some features that Alfa knew worked well, which also reduced development time. This 2.9-litre V6 uses single-scroll rather than twin-scroll turbos, which produce of boost pressure. Alfa also added mechanical cylinder deactivation to the right bank for increased highway fuel efficiency. The 90-degree V6 engine's crankshaft has three crankpins 120 degrees apart, each with two connecting rods mounted side by side. This configuration results in uneven firing at 90 and 150 degrees of each rotation, but for each cylinder bank results in even pulses every 240 degrees, providing evenly spaced exhaust pulses to each turbocharger and allowing one bank to deactivate. Additionally, from 2020 onward, Alfa added port injection, doubling the number of injectors to 12. The Maserati 3.0-litre V6 Nettuno engine, introduced in the Maserati MC20, shares many of its characteristics with the Ferrari F154 and the Alfa Romeo 690T engines. In 2023 Alfa Romeo presented the 33 Stradale model, which features a larger-displacement 690T engine, now at 3.0 litres and producing . Applications Alfa Romeo Giulia Quadrifoglio Alfa Romeo Stelvio Quadrifoglio Alfa Romeo Giulia GTA and GTAm 2023 Alfa Romeo Giulia SWB Zagato Alfa Romeo References Alfa Romeo V6 engines Alfa Romeo engines Gasoline engines by model Engines by model Piston engines Internal combustion engine
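The firing-interval claim above can be sanity-checked with a few lines of arithmetic. The even 240-degree spacing within a bank is taken from the description; treating the second bank as simply shifted by the 90-degree bank angle is an assumption of this sketch, which is an illustration rather than a statement about Alfa Romeo's actual firing order:

```python
# Each bank fires its three cylinders evenly across the 720-degree four-stroke cycle.
bank_a = [0, 240, 480]                      # even 240-degree pulses feeding one turbocharger
bank_b = [angle + 90 for angle in bank_a]   # other bank, assumed offset by the 90-degree bank angle

firing = sorted(bank_a + bank_b)
intervals = [b - a for a, b in zip(firing, firing[1:])] + [720 - firing[-1] + firing[0]]

print(firing)     # [0, 90, 240, 330, 480, 570]
print(intervals)  # [90, 150, 90, 150, 90, 150]  -> alternating 90/150-degree intervals overall,
                  #                                 even 240-degree pulses within each bank
```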
Alfa Romeo 690T engine
Technology,Engineering
400
20,833,901
https://en.wikipedia.org/wiki/Soyuz-U2
The Soyuz-U2 (GRAU index 11A511U2) was a Soviet, later Russian, carrier rocket. It was derived from the Soyuz-U, and a member of the R-7 family of rockets. It featured increased performance compared with the baseline Soyuz-U, due to the use of syntin propellant, as opposed to RP-1 paraffin, used on the Soyuz-U. The increased payload of the Soyuz-U2 allowed heavier spacecraft to be launched, while lighter spacecraft could be placed in higher orbits, compared to those launched by Soyuz-U rockets. In 1996, it was announced that the Soyuz-U2 had been retired, as the performance advantage gained through the use of syntin did not justify the additional cost of its production. The final flight, Soyuz TM-22, occurred on 3 September 1995 from Gagarin's Start in Baikonur. The Soyuz-U2 was first used to launch four Zenit reconnaissance satellites, then it delivered crewed Soyuz spacecraft to space stations Salyut 7 and Mir: missions Soyuz T-12 to T-15 and Soyuz TM-1 to TM-22. It also supplied the stations with Progress cargo spacecraft: Progress 20 to Salyut 7, Progress 25 to 42 to Mir, followed by the new generation Progress M-1 to M-18 and finally M-23. Other missions included the Gamma telescope and three Orlets reconnaissance satellites. In total, Soyuz-U2 was launched 72 times and experienced no failures over its operational lifetime. See also List of R-7 launches References R-7 (rocket family) Space launch vehicles of the Soviet Union Space launch vehicles of Russia Soyuz program Vehicles introduced in 1982
Soyuz-U2
Astronomy
347
9,927
https://en.wikipedia.org/wiki/Endomembrane%20system
The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and plasma (cell) membrane among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter (see below). The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells (though much bigger in plant cells), are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth. In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex "pepin" system of Thiomargarita species, especially T. magnifica. The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed, they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them. History of the concept Most lipids are synthesized in yeast either in the endoplasmic reticulum, lipid particles, or the mitochondrion, with little or no lipid synthesis occurring in the plasma membrane or nuclear membrane. Sphingolipid biosynthesis begins in the endoplasmic reticulum, but is completed in the Golgi apparatus. The situation is similar in mammals, with the exception of the first few steps in ether lipid biosynthesis, which occur in peroxisomes. The various membranes that enclose the other subcellular organelles must therefore be constructed by transfer of lipids from these sites of synthesis. However, although it is clear that lipid transport is a central process in organelle biogenesis, the mechanisms by which lipids are transported through cells remain poorly understood. 
The first proposal that the membranes within cells form a single system that exchanges material between its components was by Morré and Mollenhauer in 1974. This proposal was made as a way of explaining how the various lipid membranes are assembled in the cell, with these membranes being assembled through lipid flow from the sites of lipid synthesis. The idea of lipid flow through a continuous system of membranes and vesicles was an alternative to the various membranes being independent entities that are formed from transport of free lipid components, such as fatty acids and sterols, through the cytosol. Importantly, the transport of lipids through the cytosol and lipid flow through a continuous endomembrane system are not mutually exclusive processes and both may occur in cells. Components of the system Nuclear envelope The nuclear envelope surrounds the nucleus, separating its contents from the cytoplasm. It has two membranes, each a lipid bilayer with associated proteins. The outer nuclear membrane is continuous with the rough endoplasmic reticulum membrane, and like that structure, features ribosomes attached to the surface. The outer membrane is also continuous with the inner nuclear membrane since the two layers are fused together at numerous tiny holes called nuclear pores that perforate the nuclear envelope. These pores are about 120 nm in diameter and regulate the passage of molecules between the nucleus and cytoplasm, permitting some to pass through the membrane, but not others. Since the nuclear pores are located in an area of high traffic, they play an important role in cell physiology. The space between the outer and inner membranes is called the perinuclear space and is joined with the lumen of the rough ER. The nuclear envelope's structure is determined by a network of intermediate filaments (protein filaments). This network is organized into a mesh-like lining called the nuclear lamina, which binds to chromatin, integral membrane proteins, and other nuclear components along the inner surface of the nucleus. The nuclear lamina is thought to help materials inside the nucleus reach the nuclear pores and in the disintegration of the nuclear envelope during mitosis and its reassembly at the end of the process. The nuclear pores are highly efficient at selectively allowing the passage of materials to and from the nucleus, because the nuclear envelope has a considerable amount of traffic. RNA and ribosomal subunits must be continually transferred from the nucleus to the cytoplasm. Histones, gene regulatory proteins, DNA and RNA polymerases, and other substances essential for nuclear activities must be imported from the cytoplasm. The nuclear envelope of a typical mammalian cell contains 3000–4000 pore complexes. If the cell is synthesizing DNA each pore complex needs to transport about 100 histone molecules per minute. If the cell is growing rapidly, each complex also needs to transport about 6 newly assembled large and small ribosomal subunits per minute from the nucleus to the cytosol, where they are used to synthesize proteins. Endoplasmic reticulum The endoplasmic reticulum (ER) is a membranous synthesis and transport organelle that is an extension of the nuclear envelope. More than half the total membrane in eukaryotic cells is accounted for by the ER. The ER is made up of flattened sacs and branching tubules that are thought to interconnect, so that the ER membrane forms a continuous sheet enclosing a single internal space. 
This highly convoluted space is called the ER lumen and is also referred to as the ER cisternal space. The lumen takes up about ten percent of the entire cell volume. The endoplasmic reticulum membrane allows molecules to be selectively transferred between the lumen and the cytoplasm, and since it is connected to the nuclear envelope, it provides a channel between the nucleus and the cytoplasm. The ER has a central role in producing, processing, and transporting biochemical compounds for use inside and outside of the cell. Its membrane is the site of production of all the transmembrane proteins and lipids for many of the cell's organelles, including the ER itself, the Golgi apparatus, lysosomes, endosomes, secretory vesicles, and the plasma membrane. Furthermore, almost all of the proteins that will exit the cell, plus those destined for the lumen of the ER, Golgi apparatus, or lysosomes, are originally delivered to the ER lumen. Consequently, many of the proteins found in the cisternal space of the endoplasmic reticulum lumen are there only temporarily as they pass on their way to other locations. Other proteins, however, constantly remain in the lumen and are known as endoplasmic reticulum resident proteins. These special proteins contain a specialized retention signal made up of a specific sequence of amino acids that enables them to be retained by the organelle. An example of an important endoplasmic reticulum resident protein is the chaperone protein known as BiP which identifies other proteins that have been improperly built or processed and keeps them from being sent to their final destinations. The ER is involved in cotranslational sorting of proteins. A polypeptide which contains an ER signal sequence is recognised by the signal recognition particle which halts the production of the protein. The SRP transports the nascent protein to the ER membrane where it is released through a membrane channel and translation resumes. There are two distinct, though connected, regions of ER that differ in structure and function: smooth ER and rough ER. The rough endoplasmic reticulum is so named because the cytoplasmic surface is covered with ribosomes, giving it a bumpy appearance when viewed through an electron microscope. The smooth ER appears smooth since its cytoplasmic surface lacks ribosomes. Functions of the smooth ER In the great majority of cells, smooth ER regions are scarce and are often partly smooth and partly rough. They are sometimes called transitional ER because they contain ER exit sites from which transport vesicles carrying newly synthesized proteins and lipids bud off for transport to the Golgi apparatus. In certain specialized cells, however, the smooth ER is abundant and has additional functions. The smooth ER of these specialized cells functions in diverse metabolic processes, including synthesis of lipids, metabolism of carbohydrates, and detoxification of drugs and poisons. Enzymes of the smooth ER are vital to the synthesis of lipids, including oils, phospholipids, and steroids. Sex hormones of vertebrates and the steroid hormones secreted by the adrenal glands are among the steroids produced by the smooth ER in animal cells. The cells that synthesize these hormones are rich in smooth ER. Liver cells are another example of specialized cells that contain an abundance of smooth ER. These cells provide an example of the role of smooth ER in carbohydrate metabolism. Liver cells store carbohydrates in the form of glycogen. 
The breakdown of glycogen eventually leads to the release of glucose from the liver cells, which is important in the regulation of sugar concentration in the blood. However, the primary product of glycogen breakdown is glucose-1-phosphate. This is converted to glucose-6-phosphate and then an enzyme of the liver cell's smooth ER removes the phosphate from the glucose, so that it can then leave the cell. Enzymes of the smooth ER can also help detoxify drugs and poisons. Detoxification usually involves the addition of a hydroxyl group to a drug, making the drug more soluble and thus easier to purge from the body. One extensively studied detoxification reaction is carried out by the cytochrome P450 family of enzymes, which catalyze oxidation reactions on water-insoluble drugs or metabolites that would otherwise accumulate to toxic levels in cell membrane. In muscle cells, a specialized smooth ER (sarcoplasmic reticulum) forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell becomes stimulated by a nerve impulse, calcium goes back across this membrane into the cytosol and generates the contraction of the muscle cell. Functions of the rough ER Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemicals, such as carbohydrates or sugars, are added, then the endoplasmic reticulum either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed, or they are sent to the Golgi apparatus for further processing and modification. Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER are through lipid transfer proteins at regions called membrane contact sites where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi or lysosomes. In addition to making secretory proteins, the rough ER makes membranes that grows in place from the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble phospholipids. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system. Golgi apparatus The Golgi apparatus (also known as the Golgi body and the Golgi complex) is composed of separate sacs called cisternae. Its shape is similar to a stack of pancakes. The number of these stacks varies with the specific function of the cell. The Golgi apparatus is used by the cell for further protein modification. 
The section of the Golgi apparatus that receives the vesicles from the ER is known as the cis face, and is usually near the ER. The opposite end of the Golgi apparatus is called the trans face, this is where the modified compounds leave. The trans face is usually facing the plasma membrane, which is where most of the substances the Golgi apparatus modifies are sent. Vesicles sent off by the ER containing proteins are further altered at the Golgi apparatus and then prepared for secretion from the cell or transport to other parts of the cell. Various things can happen to the proteins on their journey through the enzyme covered space of the Golgi apparatus. The modification and synthesis of the carbohydrate portions of glycoproteins is common in protein processing. The Golgi apparatus removes and substitutes sugar monomers, producing a large variety of oligosaccharides. In addition to modifying proteins, the Golgi also manufactures macromolecules itself. In plant cells, the Golgi produces pectins and other polysaccharides needed by the plant structure. Once the modification process is completed, the Golgi apparatus sorts the products of its processing and sends them to various parts of the cell. Molecular identification labels or tags are added by the Golgi enzymes to help with this. After everything is organized, the Golgi apparatus sends off its products by budding vesicles from its trans face. Vacuoles Vacuoles, like vesicles, are membrane-bound sacs within the cell. They are larger than vesicles and their specific function varies. The operations of vacuoles are different for plant and animal vacuoles. In plant cells, vacuoles cover anywhere from 30% to 90% of the total cell volume. Most mature plant cells contain one large central vacuole encompassed by a membrane called the tonoplast. Vacuoles of plant cells act as storage compartments for the nutrients and waste of a cell. The solution that these molecules are stored in is called the cell sap. Pigments that color the cell are sometime located in the cell sap. Vacuoles can also increase the size of the cell, which elongates as water is added, and they control the turgor pressure (the osmotic pressure that keeps the cell wall from caving in). Like lysosomes of animal cells, vacuoles have an acidic pH and contain hydrolytic enzymes. The pH of vacuoles enables them to perform homeostatic procedures in the cell. For example, when the pH in the cells environment drops, the H+ ions surging into the cytosol can be transferred to a vacuole in order to keep the cytosol's pH constant. In animals, vacuoles serve in exocytosis and endocytosis processes. Endocytosis refers to when substances are taken into the cell, whereas for exocytosis substances are moved from the cell into the extracellular space. Material to be taken-in is surrounded by the plasma membrane, and then transferred to a vacuole. There are two types of endocytosis, phagocytosis (cell eating) and pinocytosis (cell drinking). In phagocytosis, cells engulf large particles such as bacteria. Pinocytosis is the same process, except the substances being ingested are in the fluid form. Vesicles Vesicles are small membrane-enclosed transport units that can transfer molecules between different compartments. Most vesicles transfer the membranes assembled in the endoplasmic reticulum to the Golgi apparatus, and then from the Golgi apparatus to various locations. There are various types of vesicles each with a different protein configuration. Most are formed from specific regions of membranes. 
When a vesicle buds off from a membrane it contains specific proteins on its cytosolic surface. Each membrane a vesicle travels to contains a marker on its cytosolic surface. This marker corresponds with the proteins on the vesicle traveling to the membrane. Once the vesicle finds the membrane, they fuse. There are three well known types of vesicles. They are clathrin-coated, COPI-coated, and COPII-coated vesicles. Each performs different functions in the cell. For example, clathrin-coated vesicles transport substances between the Golgi apparatus and the plasma membrane. COPI- and COPII-coated vesicles are frequently used for transportation between the ER and the Golgi apparatus. Lysosomes Lysosomes are organelles that contain hydrolytic enzymes that are used for intracellular digestion. The main functions of a lysosome are to process molecules taken in by the cell and to recycle worn out cell parts. The enzymes inside of lysosomes are acid hydrolases which require an acidic environment for optimal performance. Lysosomes provide such an environment by maintaining a pH of 5.0 inside of the organelle. If a lysosome were to rupture, the enzymes released would not be very active because of the cytosol's neutral pH. However, if numerous lysosomes leaked the cell could be destroyed from autodigestion. Lysosomes carry out intracellular digestion, in a process called phagocytosis (from the Greek , to eat and , vessel, referring here to the cell), by fusing with a vacuole and releasing their enzymes into the vacuole. Through this process, sugars, amino acids, and other monomers pass into the cytosol and become nutrients for the cell. Lysosomes also use their hydrolytic enzymes to recycle the cell's obsolete organelles in a process called autophagy. The lysosome engulfs another organelle and uses its enzymes to take apart the ingested material. The resulting organic monomers are then returned to the cytosol for reuse. The last function of a lysosome is to digest the cell itself through autolysis. Spitzenkörper The spitzenkörper is a component of the endomembrane system found only in fungi, and is associated with hyphal tip growth. It is a phase-dark body that is composed of an aggregation of membrane-bound vesicles containing cell wall components, serving as a point of assemblage and release of such components intermediate between the Golgi and the cell membrane. The spitzenkörper is motile and generates new hyphal tip growth as it moves forward. Plasma membrane The plasma membrane is a phospholipid bilayer membrane that separates the cell from its environment and regulates the transport of molecules and signals into and out of the cell. Embedded in the membrane are proteins that perform the functions of the plasma membrane. The plasma membrane is not a fixed or rigid structure, the molecules that compose the membrane are capable of lateral movement. This movement and the multiple components of the membrane are why it is referred to as a fluid mosaic. Smaller molecules such as carbon dioxide, water, and oxygen can pass through the plasma membrane freely by diffusion or osmosis. Larger molecules needed by the cell are assisted by proteins through active transport. The plasma membrane of a cell has multiple functions. These include transporting nutrients into the cell, allowing waste to leave, preventing materials from entering the cell, averting needed materials from leaving the cell, maintaining the pH of the cytosol, and preserving the osmotic pressure of the cytosol. 
Transport proteins which allow some materials to pass through but not others are used for these functions. These proteins use ATP hydrolysis to pump materials against their concentration gradients. In addition to these universal functions, the plasma membrane has a more specific role in multicellular organisms. Glycoproteins on the membrane assist the cell in recognizing other cells, in order to exchange metabolites and form tissues. Other proteins on the plasma membrane allow attachment to the cytoskeleton and extracellular matrix, a function that maintains cell shape and fixes the location of membrane proteins. Enzymes that catalyze reactions are also found on the plasma membrane. Receptor proteins on the membrane have a shape that matches with a chemical messenger, resulting in various cellular responses. Evolution The origin of the endomembrane system is linked to the origin of eukaryotes themselves, and the origin of eukaryotes to the endosymbiotic origin of mitochondria. Many models have been put forward to explain the origin of the endomembrane system (reviewed in). The most recent concept suggests that the endomembrane system evolved from outer membrane vesicles secreted by the endosymbiotic mitochondrion, which became enclosed within infoldings of the host prokaryote (in turn, a result of the ingestion of the endosymbiont). This OMV (outer membrane vesicles)-based model for the origin of the endomembrane system is currently the one that requires the fewest novel inventions at eukaryote origin and explains the many connections of mitochondria with other compartments of the cell. Currently, this "inside-out" hypothesis (which states that the alphaproteobacteria, the ancestral mitochondria, were engulfed by the blebs of an asgardarchaeon, and later the blebs fused, leaving infoldings which would eventually become the endomembrane system) is favored more than the outside-in one (which suggested that the endomembrane system arose due to infoldings within the archaeal membrane). References Cell anatomy Membrane biology
Endomembrane system
Chemistry
4,953
27,032
https://en.wikipedia.org/wiki/Rock%20paper%20scissors
Rock paper scissors (also known by several other names and word orders, see § Names) is an intransitive hand game, usually played between two people, in which each player simultaneously forms one of three shapes with an outstretched hand. These shapes are "rock" (a closed fist), "paper" (a flat hand), and "scissors" (a fist with the index finger and middle finger extended, forming a V). The earliest form of "rock paper scissors"-style game originated in China and was subsequently imported into Japan, where it reached its modern standardized form, before being spread throughout the world in the early 20th century. A simultaneous, zero-sum game, it has three possible outcomes: a draw, a win, or a loss. A player who decides to play rock will beat another player who chooses scissors ("rock crushes scissors" or "breaks scissors" or sometimes "blunts scissors"), but will lose to one who has played paper ("paper covers rock"); a play of paper will lose to a play of scissors ("scissors cuts paper"). If both players choose the same shape, the game is tied, but is usually replayed until there is a winner. Rock paper scissors is often used as a fair choosing method between two people, similar to coin flipping, drawing straws, or throwing dice in order to settle a dispute or make an unbiased group decision. Unlike truly random selection methods, however, rock paper scissors can be played with some degree of skill by recognizing and exploiting non-random behavior in opponents. Etymology The name "rock paper scissors" is simply a translation of the Japanese words for the three gestures involved in the game, though the Japanese name for the game is different. The name Roshambo or Rochambeau has been claimed to refer to Count Rochambeau, who allegedly played the game during the American Revolutionary War. The legend that he played the game is apocryphal, as all evidence points to the game being brought to the United States later than 1910; if this name has anything to do with him it is for some other reason. It is unclear why this name became associated with the game, with hypotheses ranging from a slight phonetic similarity with the Japanese name jan-ken-pon, to the presence of a statue of Rochambeau in a neighborhood of Washington, D.C. Names The modern game is known by several other names such as Rochambeau, Roshambo, Ro-sham-bo, Bato Bato Pik, and Jak-en-poy. While the game's name is a list of three items, different countries often have the list in a different order. In North America and the United Kingdom, it is known as "rock, paper, scissors" or "scissors, paper, stone". If this name is chanted while actually playing the game, it might be followed by an exclamation of "shoot" at the moment when the players are to reveal their choice (i.e. "Rock, paper, scissors, shoot!"). In Australia, the most common name is "scissors, paper, rock" (the reverse of the American format). There have been claims that there are regional variations of the name in Australia; the video claimed that it was referred to as "scissors, paper, rock" in New South Wales, "rock, paper, scissors" in Victoria, South Australia and Western Australia and "paper, scissors, rock" in Queensland, though this has been disputed. In New Zealand, the most common name in English is "paper, scissors, rock". In Māori, it is known as (). In France, the game is sometimes called Shifumi (sometimes spelled Chifoumi) Gameplay The players may start by counting to three aloud, or by speaking the name of the game (e.g. "Rock! Paper! 
Scissors!"), raising one hand in a fist and swinging it down with each syllable onto their other hand (or in a less common variant, holding it behind their back). They then "throw" or "shoot" by extending their selected sign towards their opponent on what would have been the fourth count, often saying the word "shoot" while doing so. Variations include a version where players throw immediately on the third count (thus throwing on the count of "Scissors!"), a version including five counts rather than four ("Rock! Paper! Scissors! Says! Shoot!", almost exclusively localized in the United States to Long Island and some parts of New York City), a version where players say “Scissors! Paper! Rock!”, and a version where players shake their hands three times before "throwing". History Origins The first known mention of the game was in the book by the Ming-dynasty writer ( 1600), who wrote that the game dated back to the time of the Han dynasty (206 BCE – 220 CE). In the book, the game was called shoushiling. Li Rihua's book Note of Liuyanzhai also mentions this game, calling it shoushiling (t. 手勢令; s. 手势令), huozhitou (t. 豁指頭; s. 豁指头), or huaquan (划拳). From China the game was brought to Japan. Throughout Japanese history there are frequent references to sansukumi-ken, meaning ken (fist) games "of the three who are afraid of one another" (i.e. A beats B, B beats C, and C beats A). The earliest sansukumi-ken in Japan was apparently mushi-ken (虫拳), a version imported directly from China. In mushi-ken the "frog" (represented by the thumb) triumphs over the "slug" (represented by the little finger), which, in turn prevails over the "snake" (represented by the index finger), which triumphs over the "frog". (The Chinese and Japanese versions differ in the animals represented; in adopting the game, the Chinese characters for the venomous centipede (蜈蜙) were apparently confused with the characters for the slug (蛞蝓)). The most popular sansukumi-ken game in Japan was kitsune-ken (狐拳). In this game, a fox (狐), often attributed supernatural powers in Japanese folklore, defeats the village head, the village head (庄屋) defeats the hunter, and the hunter (猟師) defeats the fox. Kitsune-ken, unlike mushi-ken or rock–paper–scissors, requires gestures with both hands. Today, the best-known sansukumi-ken is called , which is a variation of the Chinese games introduced in the 17th century. Jan-ken uses the rock, paper, and scissors signs and is the direct source of the modern version of rock paper scissors. Hand-games using gestures to represent the three conflicting elements of rock, paper, and scissors have been most common since the modern version of the game was created in the late 19th century, between the Edo and Meiji periods. Spread beyond East Asia By the early 20th century, rock paper scissors had spread beyond East Asia, especially through increased Japanese contact with the west. Its English-language name is therefore taken from a translation of the names of the three Japanese hand-gestures for rock, paper and scissors; elsewhere in East Asia the open-palm gesture represents "cloth" rather than "paper". The shape of the scissors is also adopted from the Japanese style. A 1921 article about cricket in the Sydney Morning Herald described "stone, scissors, and paper" as a "Teutonic method of drawing lots", which the writer "came across when travelling on the Continent once". Another article, from the same year, the Washington Herald described it as a method of "Chinese gambling". 
In Britain in 1924 it was described in a letter to The Times as a hand game, possibly of Mediterranean origin, called "zhot". A reader then wrote in to say that the game "zhot" referred to was evidently Jan-ken-pon, which she had often seen played throughout Japan. Although at this date the game appears to have been new enough to British readers to need explaining, the appearance by 1927 of Gerard Fairlie's popular thriller novel with the title Scissors Cut Paper, followed by Fairlie's Stone Blunts Scissors (1929), suggests it quickly became popular. The game is referred to in two of Hildegard G. Frey's novels in the Campfire Girls series: The Campfire Girls Go Motoring (1916) and The Campfire Girls Larks and Pranks (1917), which suggests that it was known in America at least that early. The first passage where it appears says "In order that no feelings might be involved in any way over which car we other girls traveled in, Nyoda, Solomon-like, proposed that she and Gladys play 'John Kempo' for us. (That isn't spelled right, but no matter.)" There is no explanation in any of the places where it is referenced of what the game actually is. This suggests that the author at least believed that the game was well enough known in America that her readers would understand the reference. In 1927 La Vie au patronage : organe catholique des œuvres de jeunesse, a children's magazine in France, described it in detail, referring to it as a "jeu japonais" ("Japanese game"). Its French name, "Chi-fou-mi", is based on the Old Japanese words for "one, two, three" ("hi, fu, mi"). A 1932 New York Times article on the Tokyo rush hour describes the rules of the game for the benefit of American readers, suggesting it was not at that time widely known in the U.S. Likewise, the trick-taking card game “Jan-Ken-Po”, first published in 1934, describes the rules of the hand-game without mentioning any American game along the lines of “rock paper scissors”. The 1933 edition of Compton's Pictured Encyclopedia described it as a common method of settling disputes between children in its article on Japan; the name was given as "John Kem Po" and the article pointedly asserted, "This is such a good way of deciding an argument that American boys and girls might like to practice it too." Strategies It is impossible to gain an advantage over an opponent that chooses their move uniformly at random. However, it is possible to gain a significant advantage over a non-random player by predicting their move, which can be done by exploiting psychological effects or by analyzing statistical patterns of their past behavior. As a result, there have been programming competitions for algorithms that play rock paper scissors. During tournaments, players often prepare their sequence of three gestures prior to the tournament's commencement. Some tournament players employ tactics to confuse or trick the other player into making an illegal move, resulting in a loss. One such tactic is to shout the name of one move before throwing another, in order to misdirect and confuse their opponent. The "rock" move, in particular, is notable in that it is typically represented by a closed fist, often identical to the fist made by players during the initial countdown. 
If a player is attempting to beat their opponent based on quickly reading their hand gesture as the players are making their moves, it is possible to determine if the opponent is about to throw "rock" based on their lack of hand movement, as both "scissors" and "paper" require the player to reposition their hand. This can likewise be used to deceive an anticipating opponent by keeping one's fist closed until the last possible moment, leading them to believe that one is about to throw "rock". Algorithms As a consequence of rock paper scissors programming contests, many strong algorithms have emerged. For example, Iocaine Powder, which won the First International RoShamBo Programming Competition in 1999, uses a heuristically designed compilation of strategies. For each strategy it employs, it also has six metastrategies which defeat second-guessing, triple-guessing, as well as second-guessing the opponent, and so on. The optimal strategy or metastrategy is chosen based on past performance. The main strategies it employs are history matching, frequency analysis, and random guessing. Its strongest strategy, history matching, searches for a sequence in the past that matches the last few moves in order to predict the next move of the algorithm. In frequency analysis, the program simply identifies the most frequently played move. The random guess is a fallback method that is used to prevent a devastating loss in the event that the other strategies fail. There have since been some innovations, such as using multiple history-matching schemes that each match a different aspect of the history – for example, the opponent's moves, the program's own moves, or a combination of both. There have also been other algorithms based on Markov chains. In 2012, researchers from the Ishikawa Watanabe Laboratory at the University of Tokyo created a robot hand that can play rock paper scissors with a 100% win rate against a human opponent. Using a high-speed camera, the robot recognizes within one millisecond which shape the human hand is making, then produces the corresponding winning shape. Variations Players have developed numerous cultural and personal variations on the game, from simply playing the same game with different objects, to expanding into more weapons and rules, to giving their own name to the game in their national language. Rock paper scissors minus one This variation appeared in the second season of Squid Game. In the Netflix show, losing carried the risk of being shot in the head with a revolver, by way of Russian roulette. In this version, players use both hands. After simultaneously choosing their symbols, the players look at their opponent's choices and then each withdraw one hand at the same time. The winner is decided by the remaining symbols. Rock beats scissors, paper beats rock and scissors beats paper. If player one shows the same symbol with both hands, for example paper, while player two shows rock and scissors, player two can easily win, because player one has nothing left that beats scissors. If the game results in a tie, players start over and continue playing until one of them wins. Adapted rules In Korea, where the standard version of the game is called gawi-bawi-bo, a two-player upgraded version exists by the name muk-jji-ppa. After showing their hands, the player with the winning throw shouts "muk-jji-ppa!" upon which both players throw again. 
If they throw differently (for example, rock and paper, or paper and scissors), whoever wins this second round shouts "muk-jji-ppa!" and thus the play continues until both players throw the same item (for example, rock and rock), at which point whoever was the last winner becomes the actual winner. In another popular two-handed variant, one player will shout "minus one" after the initial play. Each player removes one hand, and the winner is decided by the remaining hands in play. In Japan, a strip game variant of rock paper scissors is known as 野球拳 (Yakyuken). The loser of each round removes an article of clothing. The game is a minor part of porn culture in Japan and other Asian countries after the influence of TV variety shows and Soft On Demand. In the Philippines, the game is called jak-en-poy (from the Japanese jankenpon). In a longer version of the game, a four-line song is sung, with hand gestures displayed at the end of each (or the final) line: "Jack-en-poy! / Hali-hali-hoy! / Sino'ng matalo, / siya'ng unggoy!" ("Jack-en-poy! / Hali-hali-hoy! / Whoever loses is the monkey!") In the former case, the person with the most wins at the end of the song wins the game. A shorter version of the game uses the chant "Bato-bato-pick" ("Rock-rock-pick [i.e. choose]") instead. A multiple player variation can be played: Players stand in a circle and all throw at once. If rock, paper, and scissors are all thrown, it is a stalemate, and they rethrow. If only two throws are present, all players with the losing throw are eliminated. Play continues until only the winner remains. Different weapons In Indonesia, the game is called suten, suit or just sut, and the three signs are elephant (slightly raised thumb), human (outstretched index finger) and ant (outstretched pinky finger). Elephant is stronger than human, human is stronger than ant, but elephant is afraid of the ant. Using the same tripartite division, there is a full-body variation in lieu of the hand signs called "Bear, Hunter, Ninja". In this iteration the participants stand back-to-back and at the count of three (or ro-sham-bo as is traditional) turn around to face each other, using their arms to evoke one of the totems. The players' choices break down as: Hunter shoots bear; Bear eats ninja; Ninja kills hunter. Additional weapons Generalized rock-paper-scissors games where the players have a choice of more than three weapons have been studied. Any variation of rock paper scissors is an oriented graph, where the nodes represent the symbols (weapons) choosable by the players, and an edge from A to B means that A defeats B. Each oriented graph is a potentially playable rock paper scissors game. According to theoretical calculations, the number of distinguishable (i.e. not isomorphic) oriented graphs grows with the number of weapons n = 3, 4, 5, ... as follows: 7, 42, 582, 21480, 2142288, 575016219, 415939243032, ... The French game pierre, papier, ciseaux, puits (stone, paper, scissors, well) is unbalanced; both the stone and scissors fall in the well and lose to it, while paper covers both stone and well. This means two "weapons", well and paper, can defeat two moves, while the other two weapons each defeat only one of the other three choices. The stone has no advantage over the well, so the optimal strategy is to play each of the other objects (paper, scissors and well) one-third of the time.
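The claim about the well variant can be checked numerically. The sketch below is an illustration only, assuming the usual zero-sum payoffs (+1 for a win, -1 for a loss, 0 for a tie); it is not drawn from any cited source. It shows that against the mix of one-third paper, one-third scissors and one-third well, no pure counter-move gains anything, so the mix is optimal.

# Check that (0, 1/3, 1/3, 1/3) over (rock, paper, scissors, well) is optimal
# in the French "well" variant. Payoffs are from the row player's point of view.
payoff = [
    [ 0, -1, +1, -1],  # rock:     loses to paper and well, beats scissors
    [+1,  0, -1, +1],  # paper:    beats rock and well, loses to scissors
    [-1, +1,  0, -1],  # scissors: beats paper, loses to rock and well
    [+1, -1, +1,  0],  # well:     beats rock and scissors, loses to paper
]

strategy = [0.0, 1/3, 1/3, 1/3]  # never play rock

# Expected payoff of each pure counter-move against this mixed strategy.
for move, row in zip(["rock", "paper", "scissors", "well"], payoff):
    expected = sum(p * v for p, v in zip(strategy, row))
    print(f"{move:8s} expects {expected:+.3f}")
# Every pure reply expects at most 0 (rock actually expects -1/3),
# so the opponent cannot profit and the one-third mix is optimal.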
Typically, variants are considered in which the number of moves is odd and each move defeats exactly half of the other moves while being defeated by the other half. Variations with up to 101 different moves have been published. Adding new gestures has the effect of reducing the odds of a tie, while increasing the complexity of the game. The probability of a tie in an odd-number-of-weapons game can be calculated based on the number of weapons n as 1/n, so the probability of a tie is 1/3 in standard rock paper scissors, but 1/5 in a version that offered five moves instead of three. One popular five-weapon expansion is "rock paper scissors Spock lizard", invented by Sam Kass and Karen Bryla, which adds "Spock" and "lizard" to the standard three choices. "Spock" is signified with the Star Trek Vulcan salute, while "lizard" is shown by forming the hand into a sock-puppet-like mouth. Spock smashes scissors and vaporizes rock; he is poisoned by lizard and disproved by paper. Lizard poisons Spock and eats paper; it is crushed by rock and decapitated by scissors. This variant was mentioned in a 2005 article in The Times of London and was later the subject of an episode of the American sitcom The Big Bang Theory in 2008 (as rock-paper-scissors-lizard-Spock). A game-theoretic analysis showed that 4 variants of 582 possible variations using 5 different weapons have non-trivial mixed strategy equilibria. The most representative game of these 4 is "rock, paper, scissors, fire, water". Rock beats scissors, paper beats rock, scissors beats paper, fire beats everything except water, and water is beaten by everything except it beats fire. The perfect game-theoretic strategy is to use rock, paper, and scissors one-ninth of the time each, and fire and water one-third of the time each. Nevertheless, experiments show that people underuse water and overuse rock, paper, and scissors in this game. Analogues in real life Lizard mating strategies The common side-blotched lizard (Uta stansburiana) exhibits a rock paper scissors pattern in its mating strategies. Among its three male throat color types, "orange beats blue, blue beats yellow, and yellow beats orange" in competition for females, which is similar to the rules of rock-paper-scissors. Bacteria Some bacteria also exhibit a rock paper scissors dynamic when they engage in antibiotic production. The theory for this finding was demonstrated by computer simulation and in the laboratory by Benjamin Kerr, working at Stanford University with Brendan Bohannan. Additional in vitro results demonstrate rock paper scissors dynamics in additional species of bacteria. Biologist Benjamin C. Kirkup Jr. demonstrated that these antibiotics, bacteriocins, were active as Escherichia coli compete with each other in the intestines of mice, and that the rock paper scissors dynamics allowed for the continued competition among strains: antibiotic-producers defeat antibiotic-sensitives; antibiotic-resisters multiply and withstand and out-compete the antibiotic-producers, letting antibiotic-sensitives multiply and out-compete others; until antibiotic-producers multiply again. Rock paper scissors is the subject of continued research in bacterial ecology and evolution. It is considered one of the basic applications of game theory and non-linear dynamics to bacteriology. Models of evolution demonstrate how intragenomic competition can lead to rock paper scissors dynamics from a relatively general evolutionary model.
The general nature of this basic non-transitive model is widely applied in theoretical biology to explore bacterial ecology and evolution. Mechanical devices and geometrical constructions In the televised robot combat competition BattleBots, relations between "lifters, which had wedged sides and could use forklift-like prongs to flip pure wedges", "spinners, which were smooth, circular wedges with blades on their bottom side for disabling and breaking lifters", and "pure wedges, which could still flip spinners" are analogical to relations in rock paper scissors games and called "robot Darwinism". Instances of usage American court case In 2006, American federal judge Gregory Presnell from the Middle District of Florida ordered opposing sides in a lengthy court case to settle a trivial (but lengthily debated) point over the appropriate place for a deposition using the game of rock paper scissors. The ruling in Avista Management v. Wausau Underwriters stated: Auction house selection In 2005, when Takashi Hashiyama, CEO of Japanese television equipment manufacturer Maspro Denkoh, decided to auction off the collection of Impressionist paintings owned by his corporation, including works by Paul Cézanne, Pablo Picasso, and Vincent van Gogh, he contacted two leading auction houses, Christie's International and Sotheby's Holdings, seeking their proposals on how they would bring the collection to the market as well as how they would maximize the profits from the sale. Both firms made elaborate proposals, but neither was persuasive enough to earn Hashiyama's approval. Unwilling to split up the collection into separate auctions, Hashiyama asked the firms to decide between themselves who would hold the auction, which included Cézanne's Large Trees Under the Jas de Bouffan, estimated to be worth between $12 million to $16 million. The houses were unable to reach a decision. Hashiyama told the two firms to play rock paper scissors to decide who would get the rights to the auction, explaining that "it probably looks strange to others, but I believe this is the best way to decide between two things which are equally good." The auction houses had a weekend to come up with a choice of move. Christie's went to the 11-year-old twin daughters of the international director of Christie's Impressionist and Modern Art Department Nicholas Maclean, who suggested "scissors" because "Everybody expects you to choose 'rock'." Sotheby's said that they treated it as a game of chance and had no particular strategy for the game, but went with "paper". Christie's won the match and sold the $20 million collection, earning millions of dollars of commission for the auction house. FA Women's Super League match Prior to a 26 October 2018 match in the FA Women's Super League, the referee, upon being without a coin for the pregame coin toss, had the team captains play rock paper scissors to determine which team would kick-off. The referee was subsequently suspended for three weeks by The Football Association. Play by chimpanzees In Japan, researchers have taught chimpanzees to identify winning hands according to the rules of rock paper scissors. Game design In many games, it is common for a group of possible choices to interact in a rock paper scissors style, where each selection is strong against a particular choice, but weak against another. Such mechanics can make a game somewhat self-balancing, prevent gameplay from being overwhelmed by a single dominant strategy and single dominant type of unit. 
Many card-based video games in Japan use the rock paper scissors system as their core fighting system, with the winner of each round being able to carry out their designated attack. In Alex Kidd in Miracle World, the player has to win games of rock paper scissors against each boss to proceed. Others use simple variants of rock paper scissors as subgames. Many Nintendo role-playing games prominently feature a rock paper scissors gameplay element. In Pokémon, there is a rock paper scissors element in the type effectiveness system. For example, a Grass-typed Pokémon is weak to Fire, Fire is weak to Water, and Water is weak to Grass. In the 3DS remake of Mario & Luigi: Superstar Saga and Mario & Luigi: Bowser's Inside Story, the battles in the second mode use a “Power Triangle” system based on the game's three attack types: Melee, Ranged, and Flying. In the Fire Emblem series of strategy role-playing games, the Weapon Triangle and Trinity of Magic influence the hit and damage rates of weapon types based on whether they are at an advantage or a disadvantage in their respective rock paper scissors system. In the Super Smash Bros. series, the three basic actions used during battles are described in their respective rock paper scissors system: attack, defense, and grab. The "Card-Jitsu" minigame in Club Penguin is a rock-paper-scissors game using cards that represent the three elements, Fire, Water and Snow. Fire beats snow, snow beats water, water beats fire. Tournaments Various competitive rock paper scissors tournaments have been organised by different groups. World Rock Paper Scissors Association Started in 2015, the WRPSA has hosted Professional Rock Paper Scissors Tournaments all around the world. World Rock Paper Scissors Society The World Rock Paper Scissors Society hosted Professional Rock Paper Scissors Tournaments from 2002 to 2009. These open, competitive championships were widely attended by players from around the world and attracted widespread international media attention. WRPS events were noted for their large cash prizes, elaborate staging, and colorful competitors. In 2004, the championships were broadcast on the U.S. television network Fox Sports Net (later known as Bally Sports), with the winner being Lee Rammage, who went on to compete in at least one subsequent championship. The 2007 tournament was won by Andrea Farina. The last tournament hosted by the World RPS Society was in Toronto, Canada, on November 14, 2009. UK championships Several RPS events have been organised in the United Kingdom by Wacky Nation. The 1st UK Championship took place on 13 July 2007, and were then held annually. The 2019 event was won by Ellie Mac, who went on to pick up the cash prize of £20,000 but was unable to double her earnings in 2020 due to the coronavirus outbreak. USARPS tournaments USA Rock Paper Scissors League is sponsored by Bud Light. Leo Bryan Pacis was the first commissioner of the USARPS. Cody Louis Brown was elected as the second commissioner of the USARPS in 2014. In April 2006, the inaugural USARPS Championship was held in Las Vegas. Following months of regional qualifying tournaments held across the US, 257 players were flown to Las Vegas for a single-elimination tournament at the House of Blues where the winner received $50,000. The tournament was shown on the A&E Network on 12 June 2006. The $50,000 2007 USARPS Tournament took place at the Las Vegas Mandalay Bay in May 2007. 
In 2008, Sean "Wicked Fingers" Sears beat 300 other contestants and walked out of the Mandalay Bay Hotel and Casino with $50,000 after defeating Julie "Bulldog" Crossley in the finals. The inaugural Budweiser International Rock, Paper, Scissors Federation Championship was held in Beijing, China after the close of the 2008 Summer Olympics at Club Bud. A Belfast man won the competition. National XtremeRPS Competition 2007–2008 The XtremeRPS National Competition is a US nationwide RPS competition with Preliminary Qualifying contests that started in January 2007 and ended in May 2008, followed by regional finals in June and July 2008. The national finals were to be held in Des Moines, Iowa, in August 2008, with a chance to win up to $5,000. Guinness Book of World Records The largest rock paper scissors tournament hosted 2,950 players and was achieved by Oomba, Inc. (USA) at Gen Con 2014 in Indianapolis, Indiana, United States, on 17 August 2014. World Series Former Celebrity Poker Showdown host and USARPS Head Referee Phil Gordon has hosted an annual $500 World Series of Rock Paper Scissors event in conjunction with the World Series of Poker since 2005. The winner of the WSORPS receives an entry into the WSOP Main Event. The event is an annual fundraiser for the "Cancer Research and Prevention Foundation" via Gordon's charity Bad Beat on Cancer. Poker player Annie Duke won the Second Annual World Series of Rock Paper Scissors. The tournament is taped by ESPN and highlights are covered during "The Nuts" section of ESPN's annual WSOP broadcast. 2009 was the fifth year of the tournament. Jackpot En Poy of Eat Bulaga! Jackpot En Poy is a game segment on the Philippines' longest running noontime variety show, Eat Bulaga!. The game is based on the classic children's game rock paper scissors (Jak-en-poy in Filipino, derived from the Japanese Jan-ken-pon) where four players are paired to compete in the three-round segment. In the first round, the first pair plays against each other until one player wins three times. The next pair then plays against each other in the second round. The winners from the first two rounds then compete against each other to finally determine the ultimate winner. The winner of the game then moves on to the final round. In the final round, the player is presented with several Dabarkads, each holding different amounts of cash prize. The player will then pick three Dabarkads who they will play rock paper scissors against. The player plays against them one at a time. If the player wins against any of the Eat Bulaga! hosts, they will win the cash prize. See also Chopsticks (hand game) Matching pennies, the binary equivalent Morra (game), another hand game for deciding trivial matters Intransitive dice Rock paper scissors and human social cyclic behavior Simultaneous action selection Mixed strategy References Notes Bibliography Culin, Stewart (1895) Korean Games, With Notes on the Corresponding Games at China and Japan. (evidence of nonexistence of rock paper scissors in the West) Gomme, Alice Bertha (1894, 1898) The traditional games of England, Scotland, and Ireland, 2 vols. (more evidence of nonexistence of rock paper scissors in the West) Opie, Iona & Opie, Peter (1969) Children's Games in Street and Playground Oxford University Press, London. (Details some variants on rock paper scissors such as 'Man, Earwig, Elephant' in Indonesia, and presents evidence for the existence of 'finger throwing games' in Egypt as early as 2000 B.C.) 
Baldwin, Wyatt (2017) The Official Rock Paper Scissors Handbook . The Official Strategy Guide of the World Rock Paper Scissors Association Walker, Douglas & Walker, Graham (2004) The Official Rock Paper Scissors Strategy Guide. Fireside. (strategy, tips and culture from the World Rock Paper Scissors Society). External links A biological example of rock paper scissors: Interview with biologist Barry Sinervo on the 7th Avenue Project Radio Show The World Rock Paper Scissors Association Rock Paper Scissors Programming Competition Rock Paper Scissors online remote edition Children's games Games of chance Hand games Chinese games Sampling (statistics) Game theory game classes
Rock paper scissors
Mathematics
6,886
11,657,734
https://en.wikipedia.org/wiki/Slicer%20%28guitar%20effect%29
A slicer is an effects unit which is similar to a tremolo, vibrato, phaser, or autopan. It combines a modulation sequence with a noise gate or envelope filter to create a percussive and rhythmic effect like a helicopter, with rapid cutting out and coming in—on and off. Most have variable speeds and depths, creating different sounds. It may be implemented through an effects unit or a VST. The Boss SL-20 is an example of a slicer effect in a guitar pedal. References Electronic musical instruments Audio effects Effects units Audio engineering Sound recording
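As a rough illustration of the mechanism described above (a sketch only, not taken from the documentation of any particular unit such as the Boss SL-20), a slicer can be modelled as a rhythmic on/off envelope multiplied into the signal; the rate, depth and duty-cycle parameters below are illustrative names.

# Minimal slicer sketch: gate the input with a square-wave pattern.
import numpy as np

def slicer(signal, sample_rate, rate_hz=8.0, depth=1.0, duty=0.5):
    """Apply a square-wave gate: depth=1 cuts the signal fully, 0 leaves it dry."""
    t = np.arange(len(signal)) / sample_rate
    phase = (t * rate_hz) % 1.0
    gate = np.where(phase < duty, 1.0, 1.0 - depth)  # "on" for `duty` of each cycle
    return signal * gate

# Example: slice one second of a 440 Hz tone at 8 slices per second.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
sliced = slicer(tone, sr, rate_hz=8.0, depth=1.0)
# A real unit would smooth the gate edges slightly to avoid audible clicks.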
Slicer (guitar effect)
Engineering
120
34,275,630
https://en.wikipedia.org/wiki/NONCODE
The NONCODE database is a collection of expression and functional lncRNA data obtained from re-annotated microarray studies. See also lncRNA References External links http://www.noncode.org Biological databases RNA Non-coding RNA
NONCODE
Biology
54
34,486,288
https://en.wikipedia.org/wiki/BOP%20clade
The BOP clade (sometimes BEP clade) is one of two major lineages (or clades) of undefined taxonomic rank in the grasses (Poaceae), containing more than 5,400 species, about half of all grasses. Well-known members of this clade include rice, some major cereals such as wheat, barley, oat, and rye, many lawn and pasture grasses, and bamboos. Its sister group is the PACMAD clade; in contrast with many species of that group who have evolved C4 photosynthesis, the BOP grasses all use the C3 photosynthetic pathway. The clade contains three subfamilies from whose initials its name derives: the bamboos (Bambusoideae); Oryzoideae (syn. Ehrhartoideae), including rice; and Pooideae, mainly distributed in temperate regions, with the largest diversity and important cereal crops such as wheat and barley. Oryzoideae is the earliest-diverging lineage, sister to the bamboos and Pooideae: References Poaceae Photosynthesis Plant unranked clades
BOP clade
Chemistry,Biology
233
21,718,271
https://en.wikipedia.org/wiki/Novay
Novay, formally known as the Telematica Instituut was a Dutch research institute in the field of information technology, founded in 1997, known for its development of ArchiMate. In 2009 the Telematica Instituut was reorganized and operated under the new name Novay. It filed for bankruptcy April 3, 2014, and is dissolved. Overview Novay was a public-private partnership of knowledge institutes and companies with the objective of increasing the competitive power and innovative capability of the Dutch business community. It is managed and funded by top companies and the government. It aims to translate fundamental knowledge into market-oriented research for the public and private sectors in the field of telematics: multimedia, electronic commerce, mobile communications, CSCW, knowledge management, etc. The work of Novay focused on total solutions that can be applied directly in businesses and society at large. It gathers insights from a wide range of subject areas, such as information technology, business economics, organisational science, psychology and sociology. These insights are developed further in multidisciplinary teams and forged into new concepts, products and services in the area of information and communication technology. Research and development Notable Telematica Instituut/Novay research projects have focussed on: Testbed, an architecture of business processes: a software tool and supporting resources for analysing, designing, simulating and introducing business processes. The tool provides information on the consequences of changes in business processes in areas such as service levels, production time and costs, workflow and automation, even before the actual processes have been introduced. In addition, the graphical interface makes the processes understandable for non-specialists. Friends, Customised Internet services: This project delivered a component-based middleware platform for the development, rollout and use of all kinds of Internet services. ArchiMate : An open and independent modelling language for enterprise architecture, supported by different tool vendors and consulting firms. It provides instruments to support enterprise architects in describing, analyzing and visualizing the relationships among business domains in an unambiguous way. ArchiMate is one of the open standards hosted by the Open Group and based on the IEEE 1471 standard. The ArchiMate framework was developed by the Telematica Instituut to offer companies a simple and intuitive concept that reconciles both IT and business aspects. The standard is constantly being enhanced through the work of an international forum, the ArchiMate forum. See also European Research Center for Information Systems IYOUIT Maeslantkering MobileHCI Personal knowledge management References External links Novay.nl confirming "Novay ends its activities" Enterprise modelling Information technology research institutes Research institutes in the Netherlands
Novay
Engineering
543
1,600,718
https://en.wikipedia.org/wiki/Iodine%E2%80%93starch%20test
The iodine–starch test is a chemical reaction that is used to test for the presence of starch or for iodine. The combination of starch and iodine is intensely blue-black. The interaction between starch and the triiodide anion () is the basis for iodometry. History and principles The iodine–starch test was first described in 1814 by Jean-Jacques Colin and Henri-François Gaultier de Claubry, and independently by Friedrich Stromeyer the same year. In 1937, Canadian-American biochemist Charles S. Hanes extensively investigated the action of amylases on starch and the changes in iodine coloration during starch degradation and proposed a spiral chain conformation for the starch molecule, suggesting that fragments with more than one complete coil of the spiral might be necessary for iodine coloration. Karl Freudenberg et al., in 1939, building upon Hanes' helical model, proposed that the helical conformation of amylose creates a hydrophobic cavity lined with CH groups, which attracts iodine molecules and leads to a shift in iodine's absorption spectrum, explaining the characteristic blue color of the complex. This model was subsequently confirmed by Robert E. Rundle and co-workers ca. 1943, who used X-ray diffraction and optical studies to provide experimental evidence for the linear arrangement of iodine molecules within the amylose helix. Research in the mid-20th century began to highlight the importance of iodide anion (as opposed to neutral molecules) in the complex formation, particularly in aqueous solutions. Studies by Mukherjee and Bhattacharyya demonstrated in 1946 that varying potassium iodide concentrations affected the ratio of I- to I2 in the complex. Thoma and French in 1960 further emphasized the necessity of iodide for complex formation in aqueous media. By the 1980s, the presence of polyiodide chains within the amylose helix became widely accepted. However, the precise composition/structure of these chains, including the balance between molecular iodine and various iodide anions, continues to be debated and investigated, with a 2022 article suggesting that they might alternate. The triiodide anion instantly produces an intense blue-black colour upon contact with starch. The intensity of the colour decreases with increasing temperature and with the presence of water-miscible organic solvents such as ethanol. The test cannot be performed at very low pH due to the hydrolysis of the starch under these conditions. It is thought that the iodine–iodide mixture combines with the starch to form an infinite polyiodide homopolymer. This was rationalized through single crystal X-ray crystallography and comparative Raman spectroscopy. Starch as an indicator Starch is often used in chemistry as an indicator for redox titrations where triiodide is present. Starch forms a very dark blue-black complex with triiodide. However, the complex is not formed if only iodine or only iodide (I−) is present. The colour of the starch complex is so deep, that it can be detected visually when the concentration of the iodine is as low as 20 μM at 20 °C. During iodine titrations, concentrated iodine solutions must be reacted with some titrant, often thiosulfate, in order to remove most of the iodine before the starch is added. This is due to the insolubility of the starch–triiodide complex which may prevent some of the iodine reacting with the titrant. Close to the endpoint, the starch is added, and the titration process is resumed taking into account the amount of thiosulfate added before adding the starch. 
The color change can be used to detect moisture or perspiration, as in the Minor test or starch–iodine test. Starch is also useful in detecting the enzyme amylase, which breaks down starch into sugars. Many bacteria like Bacillus subtilis can produce such an enzyme to help scientists identify unknown bacterial samples -- the starch-iodine test is one of many tests needed to identify the exact bacterium. The positive test for bacteria that has starch hydrolysis capabilities (able to produce amylase) is the presence of a yellow zone around a colony when iodine is added to detect starch. Medical use Although the starch-iodine test is predominantly employed in the lab, recent assessments have shown potential for clinical use, such as confirming the diagnosis of Horner's syndrome. Hospitals with limited technical accessibility can exploit this diagnostic tool since it requires resources that may be easily attainable. In order to perform the experiment, a patient's skin is first dried with 70% alcohol; with the iodine solution added, subsequently. After the skin dries completely once more, it will be dusted with a starch material. Inducing sweating conditions will cause the skin to turn dark blue. Physicians can then make a diagnosis if the test shows sweating of different intensities on the left and right side of the body. See also Lugol's iodine Counterfeit banknote detection pen References Further reading Vogel's Textbook of Quantitative Chemical Analysis, 5th edition. External links How does starch indicate iodine? General Chemistry Online Iodine test at Braukaiser Titrations.info: Potentiometric titration--Solutions used in iodometric titrations Biochemistry detection methods Carbohydrate methods Chemical tests Laboratory techniques Iodine Polyhalides Starch
Iodine–starch test
Chemistry,Biology
1,156
66,496,637
https://en.wikipedia.org/wiki/MACS%202129-1
MACS 2129-1 is an early-universe, so-called 'dead' disk galaxy discovered in 2017 with NASA's Hubble Space Telescope. It lies approximately 10 billion light-years away from Earth (a current distance of about 18 billion light-years). MACS 2129-1 has been described as 'dead' because it has ceased making new stars. See also List of galaxies References Galaxies Aquarius (constellation) Discoveries by the Hubble Space Telescope
MACS 2129-1
Astronomy
92
14,835,582
https://en.wikipedia.org/wiki/Algebraic%20combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. History The term "algebraic combinatorics" was introduced in the late 1970s. Through the early or mid-1990s, typical combinatorial objects of interest in algebraic combinatorics either admitted a lot of symmetries (association schemes, strongly regular graphs, posets with a group action) or possessed a rich algebraic structure, frequently of representation theoretic origin (symmetric functions, Young tableaux). This period is reflected in the area 05E, Algebraic combinatorics, of the AMS Mathematics Subject Classification, introduced in 1991. Scope Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group theory and representation theory, lattice theory and commutative algebra are commonly used. Important topics Symmetric functions The ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric groups. Association schemes An association scheme is a collection of binary relations satisfying certain compatibility conditions. Association schemes provide a unified approach to many topics, for example combinatorial designs and coding theory. In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups. Strongly regular graphs A strongly regular graph is defined as follows. Let G = (V,E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that: Every two adjacent vertices have λ common neighbours. Every two non-adjacent vertices have μ common neighbours. A graph of this kind is sometimes said to be a srg(v, k, λ, μ). Some authors exclude graphs which satisfy the definition trivially, namely those graphs which are the disjoint union of one or more equal-sized complete graphs, and their complements, the Turán graphs. Young tableaux A Young tableau (pl.: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley. Matroids A matroid is a structure that captures and generalizes the notion of linear independence in vector spaces. 
There are many equivalent ways to define a matroid, the most significant being in terms of independent sets, bases, circuits, closed sets or flats, closure operators, and rank functions. Matroid theory borrows extensively from the terminology of linear algebra and graph theory, largely because it is the abstraction of various notions of central importance in these fields. Matroids have found applications in geometry, topology, combinatorial optimization, network theory and coding theory. Finite geometries A finite geometry is any geometric system that has only a finite number of points. The familiar Euclidean geometry is not finite, because a Euclidean line contains infinitely many points. A geometry based on the graphics displayed on a computer screen, where the pixels are considered to be the points, would be a finite geometry. While there are many systems that could be called finite geometries, attention is mostly paid to the finite projective and affine spaces because of their regularity and simplicity. Other significant types of finite geometry are finite Möbius or inversive planes and Laguerre planes, which are examples of a general type called Benz planes, and their higher-dimensional analogs such as higher finite inversive geometries. Finite geometries may be constructed via linear algebra, starting from vector spaces over a finite field; the affine and projective planes so constructed are called Galois geometries. Finite geometries can also be defined purely axiomatically. Most common finite geometries are Galois geometries, since any finite projective space of dimension three or greater is isomorphic to a projective space over a finite field (that is, the projectivization of a vector space over a finite field). However, dimension two has affine and projective planes that are not isomorphic to Galois geometries, namely the non-Desarguesian planes. Similar results hold for other kinds of finite geometries. See also Algebraic graph theory Combinatorial commutative algebra Polyhedral combinatorics Algebraic Combinatorics (journal) Journal of Algebraic Combinatorics International Conference on Formal Power Series and Algebraic Combinatorics Citations Works cited . (Chapters from preliminary draft are available on-line.) Further reading External links
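The strongly regular graph conditions given above can be verified directly on a small example. The sketch below checks the textbook case of the Petersen graph, an srg(10, 3, 0, 1); the outer-pentagon/inner-pentagram construction used here is a standard one and is added for illustration, not taken from the article.

# Verify that the Petersen graph is strongly regular with parameters (10, 3, 0, 1):
# every vertex has degree 3, adjacent vertices share 0 neighbours,
# non-adjacent vertices share exactly 1 neighbour.
from itertools import combinations

edges = set()
for i in range(5):
    edges.add(frozenset((i, (i + 1) % 5)))          # outer 5-cycle
    edges.add(frozenset((i, i + 5)))                # spokes
    edges.add(frozenset((i + 5, 5 + (i + 2) % 5)))  # inner pentagram

adj = {v: {u for e in edges if v in e for u in e if u != v} for v in range(10)}

assert all(len(adj[v]) == 3 for v in adj)           # k = 3
for u, v in combinations(range(10), 2):
    common = len(adj[u] & adj[v])
    if frozenset((u, v)) in edges:
        assert common == 0                          # lambda = 0
    else:
        assert common == 1                          # mu = 1
print("Petersen graph is an srg(10, 3, 0, 1)")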
Algebraic combinatorics
Mathematics
1,140
5,112,046
https://en.wikipedia.org/wiki/1%20Centauri
1 Centauri, or i Centauri, is a yellow-white-hued binary star system in the southern constellation Centaurus. It can be seen with the naked eye, having an apparent visual magnitude of +4.23. Based upon an annual parallax shift of 51.54 mas as seen from Earth's orbit, it is located 51.5 light-years from the Sun. The system is moving closer to the Sun with a radial velocity of −21.5 km/s. Spectrographic images taken at the Cape Observatory between 1921 and 1923 showed this star has a variable radial velocity, which indicated this is a single-lined spectroscopic binary star system. The pair have an orbital period of 9.94 days and an eccentricity of about 0.2. The primary component has received a number of different stellar classifications. For example, Jaschek et al. (1964) lists F0V, F2III, F4III and F4IV, thus ranging in evolutionary state from an ordinary F-type main-sequence star to a giant star. More recently, Houk (1982) listed a class of F3 V, matching an ordinary main-sequence star that is generating energy through hydrogen fusion at its core. The NStars project gives it a classification of F2 V. References F-type main-sequence stars Spectroscopic binaries Centaurus Centauri, i CD-32 09603 Centauri, 1 00525.1 119756 067153 5168
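The distance quoted above follows from the quoted parallax by the standard parallax-distance relation alone (no catalogue value is assumed here):

# Distance from trigonometric parallax: d [parsec] = 1000 / p [milliarcseconds].
parallax_mas = 51.54
distance_pc = 1000.0 / parallax_mas          # about 19.4 parsecs
distance_ly = distance_pc * 3.26156          # 1 parsec is about 3.26156 light-years
print(f"{distance_pc:.1f} pc ~ {distance_ly:.1f} light-years")  # about 63.3 light-years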
1 Centauri
Astronomy
319
3,676,261
https://en.wikipedia.org/wiki/Uniformity%20of%20content
Uniformity of Content is a pharmaceutical analysis parameter for the quality control of capsules or tablets. Multiple capsules or tablets are selected at random and a suitable analytical method is applied to assay the individual content of the active ingredient in each capsule or tablet. The preparation complies if not more than one individual content is outside the limits of 85 to 115% of the average content and none is outside the limits of 75 to 125% of the average content. The preparation fails to comply with the test if more than 3 individual contents are outside the limits of 85 to 115% of the average content or if one or more individual contents are outside the limits of 75 to 125% of the average content. References External links Uniformity of content for single-dose preparations at WHO Uniformity of Dosage Units Quality control Pharmacology
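A minimal sketch of the two-band check described above, assuming a batch of assayed units and the 85–115% and 75–125% limits as stated; the full pharmacopoeial procedure (including the follow-up stage on additional units) is more detailed, and the function and variable names here are illustrative only.

# Illustrative check of individual contents against the limits quoted above.
def uniformity_of_content(contents, inner=(0.85, 1.15), outer=(0.75, 1.25)):
    """Return True if at most one unit is outside 85-115% of the average
    and no unit is outside 75-125% of the average."""
    avg = sum(contents) / len(contents)
    outside_inner = sum(1 for c in contents if not inner[0] * avg <= c <= inner[1] * avg)
    outside_outer = sum(1 for c in contents if not outer[0] * avg <= c <= outer[1] * avg)
    return outside_inner <= 1 and outside_outer == 0

# Example: ten tablets assayed in mg of active ingredient.
assays = [98.5, 101.2, 99.8, 102.3, 97.6, 100.4, 103.1, 96.9, 100.0, 99.2]
print(uniformity_of_content(assays))  # True: all within 85-115% of the mean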
Uniformity of content
Chemistry
174
36,772,891
https://en.wikipedia.org/wiki/Sensory-motor%20map
In robotics, one often combines external sensory input and motor kinematics. A sensory-motor map (SMM) is a map between the perception system of the robot and an action performed by the robot. The map gives the robot an understanding of how certain motor actions affect the perceived reality by relating the kinematics and dynamics used by the robot to achieve the external sensory input. See also Asomatognosia – Neurological disorder characterized as loss of recognition or awareness of part of the body Context awareness – Ability of software to adjust to the situation of users or devices Spatial contextual awareness – Software utilizing information from sensors, a user's activity, and maps Citations References Robot control
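As a toy illustration of the idea (a hypothetical example, not drawn from any cited system), a sensory-motor map can be built by recording which sensory outcome follows each motor command during random "motor babbling", and then inverted by querying the recorded action whose outcome is closest to a desired percept:

# Toy sensory-motor map for a 2-joint planar arm: motor command -> sensed fingertip position.
import math, random

def forward_kinematics(q1, q2, l1=1.0, l2=1.0):
    """Simulated 'perception': where a camera would see the fingertip for joint angles q1, q2."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Learn the map by motor babbling: store (action, observed sensory outcome) pairs.
smm = []
for _ in range(2000):
    q1 = random.uniform(-math.pi, math.pi)
    q2 = random.uniform(-math.pi, math.pi)
    smm.append(((q1, q2), forward_kinematics(q1, q2)))

def action_for(target, table=smm):
    """Invert the map: pick the babbled action whose outcome lies closest to the target percept."""
    return min(table, key=lambda rec: (rec[1][0] - target[0]) ** 2 + (rec[1][1] - target[1]) ** 2)[0]

print(action_for((1.2, 0.8)))  # joint angles expected to bring the fingertip near (1.2, 0.8)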
Sensory-motor map
Engineering
138
69,579,267
https://en.wikipedia.org/wiki/Holmium%20phosphide
Holmium phosphide is a binary inorganic compound of holmium and phosphorus with the chemical formula HoP. The compound forms dark crystals, is stable in air, and does not dissolve in water. Synthesis Heating powdered holmium and red phosphorus in an inert atmosphere or vacuum: Ho + P → HoP Properties HoP belongs to the large class of NaCl-structured rare earth monopnictides. It is ferromagnetic at low temperatures. HoP actively reacts with nitric acid. Uses The compound is a semiconductor used in high-power, high-frequency applications and in laser diodes. References Phosphides Holmium compounds Semiconductors Rock salt crystal structure
Holmium phosphide
Physics,Chemistry,Materials_science,Engineering
130
43,885,567
https://en.wikipedia.org/wiki/Emperor%20William%20Monument%20%28Porta%20Westfalica%29
The Emperor William Monument (), near the town of Porta Westfalica in the North Rhine-Westphalian county of Minden-Lübbecke, is a colossal monument above the Weser gorge of Porta Westfalica, the "Gateway to Westphalia". It was erected to honour the first German Emperor, William I (1797–1888), by the then Prussian Province of Westphalia between 1892 and 1896 and emerged against the background of a rising German national identity. The monument, which is around high, is classified as one of Germany's national monuments. The architect of this prominent monument was Bruno Schmitz and the sculptor was Kaspar von Zumbusch. Since 2008, the monument has formed part of the Road of Monuments. As a result of its dominant geographical site, it is the most important landmark of the town of Porta Westfalica and of northern East Westphalia. Location The Emperor William Monument is located at the extreme eastern end of the range of the Wiehen Hills on the eastern slopes of the Wittekindsberg ( above sea level). It towers above the great gorge of Porta Westfalica, through which the River Weser flows between the Wiehen Hills in the west and the Weser Hills in the east and between the towns of Porta Westfalica in the south and Minden in the north. It is here that the Weser breaks out of the German Central Uplands and winds across the North German Plain. The site used to be on the eastern border of the old Province of Westphalia with Porta Westfalica. The monument lies within the municipality of Barkhausen in the borough of Porta Westfalica. To the east, immediately below the southeastern stairway to the broad base of the monument, is a point that is and some way above the structure to the west is a high point, . From the foot of the monument () to the level () of the Weser at a point a few metres southwest of the point in the gorge where the Bundesstraße 61 (Portastraße) crosses the river there is a height difference of about . History Following the death of Emperor William I, monuments in his honour were built all over Prussia and in other major German cities. On 15 March 1889, the government of the Province of Westphalia voted with a small majority to erect their Emperor William Monument at the Westphalian Gate (Porta Westfalica). Caspar von Zumbusch of Herzebrock was commissioned to create the bronze statue of William I. A competition was held by the province to select an architect to design its setting and 58 were submitted. The prizegiving jury, which also came from Zumbusch awarded first prize to Berlin architect, Bruno Schmitz, whose design was chosen for the monument. Another first prize was awarded to Dresden architects, Richard Reuter and Theodor Fischer. Preparations for its construction began in summer 1892. The cost of the whole monument, including the purchase of land and the construction of the access road was estimated at 800,000 gold marks; in the end it cost 833,000 gold marks or, according to other sources, as much as 1,000,000 gold marks. Around of brickwork was built and of stairway. On 18 October 1896 the monument was inaugurated in the presence of Emperor William II and Empress Augusta Victoria during the opening ceremony in which between 15,000 and 20,000 people took part. In the Weimar Republic, the emperor had abdicated, so the monument's 25th anniversary passed without being marked. In 1921 a memorial tablet in honour of those who had fallen in the First World War was added. 
Not until 1926 did the monument become a focal point again when the national associations held a "German Day" here. In 2000, the monument itself was restored and it was planned to start the renovation of the surrounding terrace and café in 2013, preceded by replacement of areas of the terrace that had been destroyed in 1946 when British troops blew up a wartime underground production facility. From 2013 to 2018 the monument was restored and the area remodelled based on a new visitor concept by architect Peter Bastian from Münster. A restaurant and exhibition room were built in the ring terrace of the monument. After the rebuilding the monument was ceremonially re-opened on 8 July 2018. Views There are frequently good views from the Emperor William Monument of the town of Porta Westfalica, the North German Plain and of the Weser Hills on the other side of the gorge. References Literature Fred Kaspar: "Das Kaiserdenkmal an der Porta-Westfalica". (PDF; 1,4 MB) In: Denkmalpflege in Westfalen-Lippe. January 2007, , p. 19–21. "Die Preisbewerbung für das Kaiser Wilhelm Denkmal der Provinz Westfalen". In: Centralblatt der Bauverwaltung. 10th annual issue, 1890, No. 37, pp. 387–389 and No. 38, pp. 397–398. W. Fricke: Die Porta Westfalica und ihr Kaiser-Denkmal. (Commemorative publication for the inauguration of the monument). T. T. Bruns Verlag, Minden i. W., 1896. Küster: "Das Kaiser Wilhelm-Denkmal auf dem Wittekindberge an der Porta Westfalica". In: Centralblatt der Bauverwaltung. 16th annual issue, No. 43 (24 October 1896), , pp. 469–471. Reinhard Neumann: "Die Teilnahme der Minden-Ravensberger Posaunenchöre bei der Denkmalseinweihung an der Porta Westfalica". In: Jahrbuch für westfälische Kirchengeschichte, Vol. 100, 2005, pp. 305–329 External links 18 October 1896 – Inauguration of the Emperor William Monument at the Porta Westfalica portawestfalica.de – Das Kaiser-Wilhelm-Denkmal Photographs of the monument in the LWL Media Centre for Westphalia Architecture in Germany Bronze sculptures in Germany Buildings and structures completed in 1896 Buildings and structures in Minden-Lübbecke William I Culture of Prussia Monuments and memorials in Germany PortaWestfalica Sculptures of men in Germany Statues in Germany William I Wiehen Hills
Emperor William Monument (Porta Westfalica)
Physics,Mathematics
1,312
73,716,944
https://en.wikipedia.org/wiki/Mucor%20fragilis
Mucor fragilis is an endophytic fungus that causes mold on grapes, pole beans, loquat, and the roots of medicinal plants such as Radix pseudostellariae. It belongs to the order Mucorales and phylum Mucoromycota. The observed symptoms are fluffy, soft fungal mycelium with white to dark brown discoloration that deteriorates the quality of the beans and grapes. Taxonomy Mucor fragilis was described by Bainier in 1884. Description Mucor fragilis is described as having colonies that vary in color from white and reverse white to light gray. Mucor fragilis reproduces asexually and the sporangiophores are found as two types: simple and sympodially branched. Sporangiophores are mostly sympodially branched and grow to a width of around 6–12 μm and have a variable length. These sporangiophores have globose to subglobose, multispored, light yellow sporangia on them that measure around 24.5–49.5 by 22.5–48 μm. The columellae of Mucor fragilis can be globose to ellipsoid, pyriform, or sometimes conical, and can measure around 17.5–30 by 16–29.5 μm. The columellae collar is evident. Habitat and distribution This species is isolated from soil, insects, fruits, honeycomb, limestone, and plasticized polyvinyl chloride. It is distributed worldwide in places like Australia, Brazil, Bulgaria, China, Czech Republic, Germany, Greece, India, Iran, Kenya, Korea, Lithuania, Mexico, Pakistan, Poland, Portugal, Spain, Switzerland, and the United States. It is known in 3 of the 26 states in Brazil. Ecology Mucor fragilis causes rot on many plant species in China and Pakistan. It has been found on grapes after harvest in five different fruit-market locations in Pakistan and has caused a decline in the market value of grapes. Thyme oil has been found to potentially increase the shelf-life of these grapes. Also in Pakistan, Mucor fragilis has been causing rot in Seychelles pole beans. This is the first time this fungus has been seen on pole beans, creating urgency to control it so that it does not spread. In China, Mucor fragilis has been found on one of China's highly prized medicinal plants (Radix pseudostellariae). This is also the first report of Mucor fragilis causing rot on this plant and could result in loss of production of this medicinal plant. Mucor fragilis was found on deceased adult reproductive female brown widow spiders (Latrodectus geometricus) in North Central Florida. The spiders first showed signs of reduced foraging behavior and then started to die, which confirms that Mucor fragilis is pathogenic to these spiders. Mucor fragilis releases spores that can infect species like these spiders in multiple ways, such as through their food or through wound exposure. A study of enzymes from Mucor fragilis grown on bovine blood discusses how this fungus may be helpful for studying structures on glycoconjugates containing certain glycoproteins. Blood has nutritional value but is wasted when producing meat; the goal was to find a way to reuse this biomass, and with a little more research, enzymes of Mucor fragilis may be the answer to the problem. Bioactive metabolites Mucor fragilis can simultaneously produce two bioactive metabolites, podophyllotoxin and kaempferol, like its host plant. This is very significant, as podophyllotoxin is in great demand due to its use as an anticancer and antiviral drug precursor.
As the podophyllum plant is endangered, having Mucor fragilis produce podophyllotoxin can help increase the supply of podophyllotoxin and also help keep the endangered podophyllum plant from going extinct. Mucor fragilis is also an effective endophytic fungal elicitor, as it has been shown to enhance some primary and secondary metabolites in Salvia miltiorrhiza hairy roots. References Mucoraceae Fungi described in 1884 Fungus species
Mucor fragilis
Biology
904
70,548,926
https://en.wikipedia.org/wiki/Light%20in%20painting
Light in painting fulfills several objectives, both plastic and aesthetic: on the one hand, it is a fundamental factor in the technical representation of the work, since its presence determines the vision of the projected image, as it affects certain values such as color, texture and volume; on the other hand, light has a great aesthetic value, since its combination with shadow and with certain lighting and color effects can determine the composition of the work and the image that the artist wants to project. Also, light can have a symbolic component, especially in religion, where this element has often been associated with divinity. The incidence of light on the human eye produces visual impressions, so its presence is indispensable for the perception of art. At the same time, light is intrinsically found in painting, since it is indispensable for the composition of the image: the play of light and shadow is the basis of drawing and, in its interaction with color, is the primordial aspect of painting, with a direct influence on factors such as modeling and relief. The technical representation of light has evolved throughout the history of painting, and various techniques have been created over time to capture it, such as shading, chiaroscuro, sfumato, or tenebrism. On the other hand, light has been a particularly determining factor in various periods and styles, such as Renaissance, Baroque, Impressionism, or Fauvism. The greater emphasis given to the expression of light in painting is called "luminism", a term generally applied to various styles such as Baroque tenebrism and impressionism, as well as to various movements of the late 19th century and early 20th century such as American, Belgian, and Valencian luminism. Optics Light (ultimately from Proto-Indo-European *lewktom, with the meaning "brightness") is electromagnetic radiation with a wavelength between 380 nm and 750 nm, the part of the visible spectrum that is perceived by the human eye, located between infrared and ultraviolet radiation. It consists of massless elementary particles called photons, which move at a speed of 299 792 458 m/s in a vacuum; in matter, the speed of light depends on the refractive index of the medium. The branch of physics that studies the behavior and characteristics of light is optics. Light is the physical agent that makes objects visible to the human eye. Its origin can be in celestial bodies such as the Sun, the Moon, or the stars, natural phenomena such as lightning, or in materials in combustion, ignition, or incandescence. Throughout history, human beings have devised different procedures to obtain light in spaces lacking it, such as torches, candles, candlesticks, lamps or, more recently, electric lighting. Light is both the agent that enables vision and a visible phenomenon in itself, since light is also an object perceptible by the human eye. Light enables the perception of color: light rays reach the retina, which transmits them to the optic nerve, which in turn conveys them to the brain by means of nerve impulses. The perception of light is a psychological process and each person perceives the same physical object and the same luminosity in a different way. Physical objects have different levels of luminance (or reflectance), that is, they absorb or reflect to a greater or lesser extent the light that strikes them, which affects the color, from white (maximum reflection) to black (maximum absorption).
Both black and white are not considered colors of the conventional chromatic circle, but gradations of brightness and darkness, whose transitions make up the shadows. When white light hits a surface of a certain color, photons of that color are reflected; if these photons subsequently hit another surface they will illuminate it with the same color, an effect known as radiance — generally perceptible only with intense light. If that object is in turn the same color, it will reinforce its level of colored luminosity, i.e. its saturation. White light from the sun consists of a continuous spectrum of colors which, when divided, forms the colors of the rainbow: violet, indigo blue, blue, green, yellow, orange, and red. In its interaction with the Earth's atmosphere, sunlight tends to scatter the shorter wavelengths, i.e. the blue photons, which is why the sky is perceived as blue. On the other hand, at sunset, when the atmosphere is denser, the light is less scattered, so that the longer wavelengths, red, are perceived. Color is a specific wavelength of white light. The colors of the chromatic spectrum have different shades or tones, which are usually represented in the chromatic circle, where the primary colors and their derivatives are located. There are three primary colors: lemon yellow, magenta red, and cyan blue. If they are mixed, the three secondary colors are obtained: orange red, bluish violet, and green. If a primary and a secondary are mixed, the tertiary colors are obtained: greenish blue, orange yellow, etc. On the other hand, complementary colors are two colors that are on opposite sides of the chromatic circle (green and magenta, yellow and violet, blue and orange) and adjacent colors are those that are close within the circle (yellow and green, red and orange). If a color is mixed with an adjacent color, it is shaded, and if it is mixed with a complementary color, it is neutralized (darkened). Three factors are involved in the definition of color: hue, the position within the chromatic circle; saturation, the purity of the color, which is involved in its brightness – the maximum saturation is that of a color that has no mixture with black or its complementary; and value, the level of luminosity of a color, increasing when mixed with white and decreasing when mixed with black or a complementary. The main source of light is the Sun and its perception can vary according to the time of day: the most normal is mid-morning or mid-afternoon light, generally blue, clear and diaphanous, although it depends on atmospheric dispersion and cloudiness and other climatic factors; midday light is whiter and more intense, with high contrast and darker shadows; dusk light is more yellowish, soft and warm; sunset light is orange or red, low contrast, with intense bluish shadows; evening light is a darker red, dimmer light, with weaker shadows and contrast (the moment known as alpenglow, which occurs in the eastern sky on clear days, gives pinkish tones); the light of cloudy skies depends on the time of day and the degree of cloudiness, is a dim and diffuse light with soft shadows, low contrast and high saturation (in natural environments there can be a mixture of light and shadow known as "mottled light"); finally, night light can be lunar or some atmospheric refraction of sunlight, is diffuse and dim (in contemporary times there is also light pollution from cities). 
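The three factors named above, hue, saturation and value, also underlie the HSV model used for digital colour. The snippet below is an illustration with Python's standard colorsys module, describing light mixing in RGB rather than the subtractive pigment mixing discussed in the text; it simply shows how mixing toward black lowers value and mixing toward white lowers saturation.

# Hue / saturation / value illustrated with the standard-library colorsys module.
import colorsys

def describe(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"hue={h * 360:5.1f} deg  saturation={s:.2f}  value={v:.2f}")

describe(0.9, 0.1, 0.1)     # a saturated, bright red
describe(0.45, 0.05, 0.05)  # the same hue mixed toward black: value drops
describe(0.9, 0.5, 0.5)     # the same hue mixed toward white: saturation drops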
We must also point out the natural light that filters indoors, a diffuse light of lower intensity, with a variable contrast depending on whether it has a single origin or several (for example, several windows), as well as a coloring that is also variable, depending on the time of day, the weather or the surface on which it is reflected. An outstanding interior light is the so-called "north light", which is the light that enters through a north-facing window, which does not come directly from the sun (always located to the south) and is therefore a soft and diffuse, constant and homogeneous light, much appreciated by artists in times when there was no adequate artificial lighting. As for artificial light, the main ones are: fire and candles, red or orange; electric, yellow or orange (generally tungsten or wolfram), which can be direct (focal) or diffused by lamp shades; fluorescent, greenish; and photographic, white (flash light). Logically, in many environments there can be mixed light, a combination of natural and artificial light. The visible reality is made up of a play of light and shadow: the shadow is formed when an opaque body obstructs the path of the light. In general, there is a ratio between light and shadow whose gradation depends on various factors, from lighting to the presence and placement of various objects that can generate shadows; however, there are conditions in which one of the two factors can reach the extreme, as in the case of snow or fog or, conversely, at night. We speak of high key lighting when white or light tones predominate, or low key lighting if black or dark tones predominate. Shadows can be of shape (also called "self shadows") or of projection ("cast shadows"): the former are the shaded areas of a physical object, that is, the part of that object on which light does not fall; the latter are the shadows cast by these objects on some surface, usually the ground. Self shadows define the volume and texture of an object; cast shadows help define space. The darkest part of the shadow is the "umbra" and the lightest part is the "penumbra". The shape and appearance of the shadow depends on the size and distance of the light source: the most pronounced shadows are from small or distant sources, while a large or close source will give more diffuse shadows. In the first case, the shadow will have sharp edges and the darker area (umbra) will occupy most of it; in the second, the edge will be more diffuse and the penumbra will predominate. A shadow can receive illumination from a secondary source, known as "fill light". The color of a shadow is between blue and black, and also depends on several factors, such as light contrast, transparency and translucency. The projection of shadows is different if they come from natural or artificial light: with natural light the beams are parallel and the shadow adapts both to the terrain and to the various obstacles that may intervene; with artificial light the beams are divergent, with less defined limits, and if there are several light sources, combined shadows may be produced.
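The dependence of shadow softness on the size and distance of the light source, noted above, can be put in rough numbers with similar triangles; this is an illustrative approximation added here, not a formula from the text.

# Approximate width of the penumbra cast by an opaque edge, by similar triangles:
# a source of diameter S at distance D from the edge, with a wall d behind the edge,
# blurs the shadow's border over roughly S * d / D.
def penumbra_width(source_diameter, source_distance, edge_to_wall):
    return source_diameter * edge_to_wall / source_distance

# A small, distant lamp gives a crisp edge...
print(penumbra_width(0.05, 3.0, 0.5))   # about 0.008 m: sharp shadow
# ...while a large, close source gives a soft one.
print(penumbra_width(0.60, 0.8, 0.5))   # about 0.375 m: diffuse shadow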
The reflection of light produces four derived phenomena: glints, which are reflections of the light source, be it the Sun, artificial lights or incidental sources such as doors and windows; glares, which are reflections produced by illuminated bodies acting as a reflective screen, especially white surfaces; color reflections, produced by the proximity between various objects, especially if they are luminous; and image reflections, produced by polished surfaces, such as mirrors or water. Another phenomenon produced by light is transparency, which occurs in bodies that are not opaque, to a greater or lesser degree depending on the opacity of the object, from total transparency to varying degrees of translucency. Transparency generates filtered light, a type of luminosity that can also be produced through curtains, blinds, awnings, various fabrics, pergolas and arbors, or through the foliage of trees.

Pictorial representation of light

In artistic terminology, "light" is the point or center of light diffusion in the composition of a painting, or the luminous part of a painting in relation to the shadows. This term is also used to describe the way a painting is illuminated: zenithal or plumb light (vertical rays), high light (oblique rays), straight light (horizontal rays), workshop or studio light (artificial light), etc. The term "accidental light" is also used to refer to light not produced by the Sun, which can be either moonlight or artificial light from candles, torches, etc. The light can come from different directions, which according to its incidence can be differentiated as: "lateral", when it comes from the side, a light that emphasizes the texture of objects; "frontal", when it comes from the front, which eliminates the shadows and the sensation of volume; "zenithal", a vertical light originating above the object, which produces a certain deformation of the figure; light "from below" (contrapicado), a vertical light originating below the object, which deforms the figure in an exaggerated way; and "backlight", when the origin is behind the object, thus darkening and diluting its silhouette. In relation to the distribution of light in the painting, it can be: "homogeneous", when it is distributed equally; "dual", in which the figures stand out against a dark background; or "insertive", when light and shadows are interrelated. According to its origin, light can be intrinsic ("own or autonomous light"), when the light is homogeneous, without luminous effects, directional lights or contrasts of lights and shadows; or extrinsic ("illuminating light"), when it presents contrasts, directional lights and other objective sources of light. The first occurred mainly in Romanesque and Gothic art, and the second especially in the Renaissance and Baroque. In turn, the illuminating light can occur in different ways: "focal light", when it directly presents a light-emitting object ("tangible light") or comes from an external source that illuminates the painting ("intangible light"); "diffuse light", which blurs the contours, as in Leonardo's sfumato; "real light", which aims to realistically capture sunlight, an almost utopian attempt in which artists such as Claude Lorrain, J. M. W. Turner or the Impressionists were especially engaged; and "unreal light", which has no natural or scientific basis and is closer to a symbolic light, as in the illumination of religious figures.
As for the artist's intention, light can be "compositional", when it helps the composition of the painting, as in all the previous cases; or "conceptual light", when it serves to enhance the message, for example by illuminating a certain part of the painting and leaving the rest in semi-darkness, as Caravaggio used to do. In terms of its origin, light can be "natural ambient light", in which no shadows of figures or objects appear, or "projected light", which generates shadows and serves to model the figures. It is also important to differentiate between source and focus of light: the source of light in a painting is the element that radiates the light, be it the sun, a candle or any other; the focus of light is the part of the painting that has the most luminosity and radiates it around the painting. On the other hand, in relation to the shadow, the interrelation between light and shadow is called "chiaroscuro"; if the dark area is larger than the illuminated one, it is called "tenebrism".Light in painting plays a decisive role in the composition and structuring of the painting. Unlike in architecture and sculpture, where light is real, the light of the surrounding space, in painting light is represented, so it responds to the will of the artist both in its physical and aesthetic aspect. The painter determines the illumination of the painting, that is to say, the origin and incidence of the light, which marks the composition and expression of the image. In turn, the shadow provides solidity and volume, while it can generate dramatic effects of various kinds. In the pictorial representation of light it is essential to distinguish its nature (natural, artificial) and to establish its origin, intensity and chromatic quality. Natural light depends on various factors, such as the season of the year, the time of day (auroral, diurnal, twilight or nocturnal light – from the Moon or stars) or the weather. Artificial light, on the other hand, differs according to its origin: a candle, a torch, a fluorescent, a lamp, neon lights, etc. As for the origin, it can be focused or act in a diffuse way, without a determined origin. The chromatism of the image depends on the light, since depending on its incidence an object can have different tonalities, as well as the reflections, ambiances and shadows projected. In an illuminated image the color is considered saturated at the correct level of illumination, while the color in shadow will always have a darker tonal value and will be the one that determines the relief and volume. Light is linked to space, so in painting it is intimately linked to perspective, the way of representing a three-dimensional space in a two-dimensional support such as painting. Thus, in linear perspective, light fulfills the function of highlighting objects, of generating volume, through modeling, in the form of luminous gradations; while in aerial perspective, the effects of light are sought as they are perceived by the spectator in the environment, as another element present in the physical reality represented. The light source can be present in the painting or not, it can have a direct or indirect origin, internal or external to the painting. The light defines the space through the modeling of volumes, which is achieved with the contrast between light and shadow: the relationship between the values of light and shadow defines the volumetric characteristics of the form, with a scale of values that can range from a soft fade to a hard contrast. 
Spatial limits can be objective, when they are produced by people, objects, architectures, natural elements and other factors of corporeality; or subjective, when they come from sensations such as atmosphere, depth, a hollow, an abyss, etc. In human perception, light creates closeness and darkness creates remoteness, so that a light-darkness gradient gives a sensation of depth.Aspects such as contrast, relief, texture, volume, gradients or the tactile quality of the image depend on light. The play of light and shadow helps to define the location and orientation of objects in space. For their correct representation, their shape, density and extension, as well as their differences in intensity, must be taken into account. It should also be taken into account that, apart from its physical qualities, light can generate dramatic effects and give the painting a certain emotional atmosphere. Contrast is a fundamental factor in painting; it is the language with which the image is shaped. There are two types of contrast: the "luminous", which can be by chiaroscuro (light and shadow) or by surface (a point of light that shines brighter than the rest); and the "chromatic", which can be tonal (contrast between two tones) or by saturation (a bright color with a neutral one). Both types of contrast are not mutually exclusive, in fact they coincide in the same image most of the time. Contrast can have different levels of intensity and its regulation is the artist's main tool to achieve the appropriate expression for his work. From the contrast between light and shadow depends the tonal expression that the artist wants to give to his work, which can range from softness to hardness, which gives a lesser or greater degree of dramatization. Backlighting, for example, is one of the resources that provide greater drama, since it produces elongated shadows and darker tones. The correspondence between light and shadow and color is achieved through tonal evaluation: the lightest tones are found in the most illuminated areas of the painting and the darkest in those that receive less illumination. Once the artist establishes the tonal values, he chooses the most appropriate color ranges for their representation. Colors can be lightened or darkened until the desired effect is achieved: to lighten a color, lighter related colors – such as groups of warm or cool colors – are added to it, as well as amounts of white until the right tone is found; to darken, related dark colors and some blue or shadow are added. In general, the shade is made by mixing a color with a darker shade, plus blue and a complementary of the proper color (such as yellow and dark blue, red and primary blue or magenta and green). The light and chromatic harmony of a painting depends on color, i.e. the relationship between the parts of a painting to create cohesion. There are several ways to harmonize: it can be done through "monochrome and tone dominant melodic ranges", with a single color as a base to which the value and tone is changed; if the value is changed with white or black it is a monochrome, while if the tone is changed it is a simple melodic range: for example, taking red as the dominant tone can be shaded with various shades of red (vermilion, cadmium, carmine) or orange, pink, violet, maroon, salmon, warm gray, etc. Another method is the "harmonic trios", which consists of combining three colors equidistant from each other on the chromatic circle; there can also be four, in which case we speak of "quaternions". 
Another way is the combination of "warm and cool thermal ranges": warm colors are for example red, orange, purple and yellowish green, as well as black; cool colors are blue, green and violet, as well as white (this perception of color with respect to its temperature is subjective and comes from Goethe's Theory of Colors). It is also possible to harmonize between "complementary colors", which produces the greatest chromatic contrast. Finally, "broken ranges" consist of neutralization by mixing primary colors and their complementary colors, which produces intense luminous effects, since the chromatic vibration is more subtle and the saturated colors stand out more.

Techniques

The quality and appearance of the luminous representation is in many cases linked to the technique used. The expression and the different light effects of a work depend to a great extent on the techniques and materials used. In drawing, whether in pencil or charcoal, the effects of light are achieved through the black-white duality, where white is generally the color of the paper (there are colored pencils, but they produce little contrast, so they are not very suitable for chiaroscuro and light effects). Pencil is usually worked with line and hatching, or by means of blurred spots. Charcoal allows the use of gouache and white chalk to add touches of light, as well as sanguine or sepia. Another monochrome technique is Indian ink, which generates very violent chiaroscuro, without intermediate values, making it a very expressive medium. Oil painting consists of dissolving the colors in an oily binder (linseed, walnut, almond or hazelnut oil; animal oils), adding turpentine as a diluent. Oil is the technique that best allows the artist to bring out light effects and chromatic tones. It produces vivid colors and intense effects of brightness and brilliance, and allows a free and fresh stroke, as well as a great richness of textures. Moreover, thanks to its long permanence in a fluid state, it allows for subsequent corrections. For its application, brushes, spatulas or scrapers can be used, allowing multiple textures, from thin layers and glazes to thick impastos, which produce a denser light. Pastel painting is made with a pigment pencil of various mineral colors, with binders (kaolin, gypsum, gum arabic, fig latex, fish glue, candy sugar, etc.), kneaded with wax and Marseille soap and cut into sticks. The color should be spread with a smudger, a cylinder of leather or paper used to blend the color strokes. Pastel combines the qualities of drawing and painting, and brings freshness and spontaneity. Watercolor is a technique made with transparent pigments diluted in water, with binders such as gum arabic or honey, using the white of the paper itself. Known since ancient Egypt, it has been used throughout the ages, although with greater intensity during the 18th and 19th centuries. As it is a wet technique, it provides great transparency, which highlights the luminous effect of the white color. Generally, the light tones are applied first, leaving spaces of bare paper for the pure whites; then the dark tones are applied. In acrylic paint, a plastic binder is added to the colorant, which produces fast drying and greater resistance to corrosive agents. The speed of drying allows the addition of multiple layers to correct defects and produces flat colors and glazes.
Acrylic can be worked by gradient, blurred or contrasted, by flat spots or by filling the color, as in the oil technique.

Genres

Depending on the pictorial genre, light has different considerations, since its incidence is different in interiors than in exteriors, and on objects than on people. In interiors, light generally tends to create intimate environments, usually a type of indirect light filtered through doors or windows, or filtered by curtains or other elements. In these spaces, private scenes are usually developed, which are reinforced by contrasts of light and shadow, intense or soft, natural or artificial, with areas in semi-darkness and atmospheres influenced by floating dust and other effects caused by these spaces. A separate genre of interior painting is the still life, which usually shows a series of objects or food arranged as on a sideboard. In these works the artist can manipulate the light at will, generally with dramatic effects such as side lighting, frontal lighting, zenithal lighting, backlighting, etc. The main difficulty consists in the correct evaluation of the tones and textures of the objects, as well as their brightness and transparency depending on the material. In exteriors, the main genre is the landscape, perhaps the most relevant in relation to light in that its presence is fundamental, since any exterior is enveloped in a luminous atmosphere determined by the time of day and the weather and environmental conditions. There are three main types: the terrestrial landscape, the seascape and the skyscape. The main challenge for the artist in these works is to capture the precise tone of the natural light according to the time of day, the season of the year, the viewing conditions – which can be affected by phenomena such as cloud cover, rain or fog – and an infinite number of variables that can occur in a medium as volatile as the landscape. On numerous occasions artists have gone out to paint in nature to capture their impressions first hand, a working method known by the French term en plein air ("in the open air", equivalent to "outdoors"). There is also the variant of the urban landscape, frequent especially since the 20th century, in which a factor to take into account is the artificial illumination of cities and the presence of neon lights and other types of effects; in general, in these images the planes and contrasts are more differentiated, with hard shadows and artificial and grayish colors. Light is also fundamental for the representation of the human figure in painting, since it affects the volume and generates different limits according to the play of light and shadow, which delimits the anatomical profile. Light allows the artist to nuance the surface of the body, and provides a sensation of smoothness and softness to the skin. The focus of the light is important, since its direction influences the general contour of the figure and the illumination of its surroundings: for example, frontal light makes the shadows disappear, attenuating the volume and the sensation of depth, while emphasizing the color of the skin. A partially lateral illumination causes shadows and gives relief to the volumes, and if it comes fully from the side, the shadow covers the opposite side of the figure, which appears with an enhanced volume. In backlighting, the body is shown with a characteristic halo around its contour, while the volume acquires a weightless sensation.
With overhead lighting, the projection of shadows blurs the relief and gives a somewhat ghostly appearance, just as it does when the figure is illuminated from below – although the latter is rare. A determining factor is that of the shadows, which generate a series of contours apart from the anatomical ones that provide drama to the image. Together with the luminous reflections, the gradation of shadows generates a series of effects of great richness in the figure, which the artist can exploit in different ways to achieve results of greater or lesser effect. It should also be taken into account that direct light or shadow on the skin modifies the color, varying the tonality from the characteristic pale pink to gray or white. The light can also be filtered by objects that get in its path (such as curtains, fabrics, vases or various objects), which generates different effects and colors on the skin. In relation to the human being, the portrait genre is characteristic, in which light plays a decisive role in the modeling of the face. Its elaboration is based on the same premises as those of the human body, with the addition of a greater demand for faithful representation of the physiognomic features and even the need to capture the psychology of the character. The drawing is essential to model the features according to the model and, from there, light and color are again the vehicle of translation of the visual image to its representation on the canvas. In the 20th century, abstraction emerged as a new pictorial language, in which painting is reduced to non-figurative images that no longer describe reality, but rather concepts or sensations of the artist himself, who plays with form, color, light, matter, space and other elements in a totally subjective way, not subject to conventions. Despite the absence of concrete images of the surrounding reality, light is still present on numerous occasions, generally contributing luminosity to the colors or creating chiaroscuro effects by contrasting tonal values.

Chronological factor

Another aspect in which light is a determining factor is time, the representation of chronological time in painting. Until the Renaissance, artists did not represent a specific time of day in painting and, in general, the only difference in light was between exterior and interior light. On many occasions it is difficult to identify the specific time of day in a work, since neither the direction of the light nor its quality nor the dimension of the shadows are decisive elements for recognizing a certain hour. Night was rarely represented until practically Mannerism and, in the cases in which a nocturnal atmosphere was used, it was because the narrative required it or because of some symbolic aspect: in Giotto's The Annunciation to the Shepherds or in Ambrogio Lorenzetti's Annunciation, the nocturnal atmosphere contributes to accentuate the halo of mystery surrounding the birth of Christ; in Uccello's Saint George and the Dragon, night represents evil, the world in which the dragon lives. On the other hand, even in narrative themes that take place at night, such as the Last Supper or the supper at Emmaus, this factor is sometimes deliberately avoided, as in Andrea del Sarto's Last Supper, set in daylight. Generally, the chronological setting of a scene has been linked to its narrative correlate, albeit in an approximate manner and with certain licenses on the part of the artist.
It was not until the 19th century, with industrial civilization and its advances in artificial lighting, that painters made full and precise use of the whole range of hours of the day. But just as in the contemporary age time has had a more realistic component, in the past it was more of a narrative factor, accompanying the action represented: dawn was a time of travel or hunting; noon, of action or its subsequent rest; dusk, of return or reflection; night was sleep, fear or adventure, or fun and passion; birth was morning, death was night. The temporal dimension began to gain relevance in the 17th century, when artists such as Claude Lorrain and Salvator Rosa began to detach landscape painting from a narrative context and to produce works in which the protagonist was nature, with the only variations being the time of day or the season of the year. This new conception developed with 18th-century Vedutism and 19th-century Romantic landscape, and culminated with Impressionism. The first light of the day is that of dawn, sunrise or aurora (sometimes the aurora, which would be the first brightness of the sky, is differentiated from dawn, which would correspond to sunrise). Until the 17th century, dawn appeared only in small pieces of landscape, usually behind a door or a window, but was never used to illuminate the foreground. The light of dawn generally has a spherical effect, so until the appearance of Leonardo's aerial perspective it was not widely used. In his Dictionary of the Fine Arts of Design (1797), Francesco Milizia considered the light of dawn the most suitable for the representation of landscapes. Noon and the hours immediately before and after it have always been a stable frame for an objective representation of reality, although it is difficult to pinpoint the exact moment in most paintings on the basis of the different light intensities. On the other hand, exact noon was discouraged because of its extreme refulgence, a caution expressed both by Leonardo and by Milizia. Most art treatises advised the afternoon light, which was the most used especially from the Renaissance to the 18th century. Vasari advised placing the sun to the east because "the figure that is made has a great relief and great goodness and perfection is achieved". In the early days of modern painting, the sunset used to be circumscribed to a celestial vault characterized by its reddish color, without an exact correspondence with the illumination of figures and objects. It was again with Leonardo, in his notes, that a more naturalistic study of twilight began. For Milizia this moment was risky, since "the more splendid these accidents are (the flaming twilight is always an excess), the more they must be observed to represent them well". Finally, the night has always been a singularity within painting, to the point of constituting a genre of its own: the nocturne. In these scenes the light comes from the Moon, the stars or from some type of artificial illumination (bonfires, torches, candles or, more recently, gas or electric light). The justification for a night scene has generally been given by iconographic themes occurring in this time period. In the 14th century painting began to move away from the symbolic and conceptual content of medieval art in search of a figurative content based on a more objective spatio-temporal axis.
Renaissance artists were resistant to the nocturnal setting, since their experimentation in the field of linear perspective required an objective and stable frame in which full light was indispensable. Thus, Lorenzo Ghiberti stated that "it is not possible to be seen in darkness" and Leonardo wrote that "darkness means complete deprivation of light". Leonardo advised a night scene only with the illumination of a fire, as a mere artifice to make a night scene diurnal. However, Leonardo's sfumato opened a first door to a naturalistic representation of the night, thanks to the chromatic decrease with distance in which the bluish white of Leonardo's luminous air can become a bluish black for the night: just as the first creates an effect of remoteness, the second provokes closeness, the dilution of the background in the gloom. This tendency would reach its climax in Baroque tenebrism, in which darkness is used to add drama to the scene and to emphasize certain parts of the painting, often with a symbolic aspect. On the other hand, in the 17th century the representation of the night acquired a more scientific character, especially thanks to Galileo's use of the telescope and a more detailed observation of the night sky. Finally, advances in artificial lighting in the 19th century boosted the conquest of nighttime, which became a time for leisure and entertainment, a circumstance that was especially captured by the Impressionists.

Symbology

Throughout the history of painting, light has often had an aesthetic component, which identifies it with beauty, as well as a symbolic meaning, especially related to religion, but also to knowledge, goodness, happiness and life, or in general to the spiritual and immaterial. Sometimes the light of the Sun has been equated with inspiration and imagination, and that of the Moon with rational thought. In contrast, shadows and darkness represent evil, death, ignorance, immorality, misfortune or secrecy. Thus, many religions and philosophies throughout history have been based on the dichotomy between light and darkness, such as Ahura Mazda and Ahriman, yin and yang, angels and demons, spirit and matter, and so on. In general, light has been associated with the immaterial and spiritual, probably because of its ethereal and weightless aspect, and that association has often been extended to other concepts related to light, such as color, shadow, radiance, evanescence, etc. The identification of light with a transcendent meaning comes from antiquity and probably existed in the minds of many artists and religious people before the idea was written down. In many ancient religions the deity was identified with light, such as the Semitic Baal, the Egyptian Ra or the Iranian Ahura Mazda. Primitive peoples already had a transcendental concept of light – the so-called "metaphor of light" – generally linked to immortality, which related the afterlife to starlight. Many cultures sketched a place of infinite light where the souls rested, a concept also picked up by Aristotle and various Fathers of the Church such as Saint Basil and Saint Augustine. On the other hand, many religious rites were based on "illumination" to purify the soul, from ancient Babylon to the Pythagoreans. In Greek mythology Apollo was the god of the Sun and has often been depicted in art within a disk of light. Apollo was also the god of beauty and the arts, a clear symbolic link between light and these two concepts.
Also related to light is the goddess of dawn, Eos (Aurora in Roman mythology). In Ancient Greece, light was synonymous with life and was also related to beauty. Sometimes the fluctuation of light was related to emotional changes, as well as to intellectual capacity. On the other hand, the shadow had a negative component, it was related to the dark and hidden, to evil forces, such as the spectral shadows of Tartarus. The Greeks also related the sun to "intelligent light" (φῶς νοετόν), a driving principle of the movement of the universe, and Plato drew a parallel between light and knowledge. The ancient Romans distinguished between lux (luminous source) and lumen (rays of light emanating from that source), terms they used according to the context: thus, for example, lux gloriae or lux intelligibilis, or lumen naturale or lumen gratiae. In Christianity, God is also often associated with light, a tradition that goes back to the philosopher Pseudo-Dionysius Areopagite (On the Celestial Hierarchy, On the Divine Names), who adapted a similar one from Neoplatonism. For this 5th century author, "Light derives from Good and is the image of Goodness". Later, in the 9th century, John Scotus Erigena defined God as "the father of lights". Already the Bible begins with the phrase "let there be light" (Ge 1:3) and points out that "God saw that the light was good" (Ge 1:4). This "good" had in Hebrew a more ethical sense, but in its translation into Greek the term καλός (kalós, "beautiful") was used, in the sense of kalokagathía, which identified goodness and beauty; although later in the Latin Vulgate a more literal translation was made (bonum instead of pulchrum), it remained fixed in the Christian mentality the idea of the intrinsic beauty of the world as the work of the Creator. On the other hand, the Holy Scriptures identify light with God, and Jesus goes so far as to affirm: "I am the light of the world, he who follows me will not walk in darkness, for he will have the light of life" (John 8:12). This identification of light with divinity led to the incorporation in Christian churches of a lamp known as "eternal light", as well as the custom of lighting candles to remember the dead and various other rites. Light is also present in other areas of the Christian religion: the Conception of Jesus in Mary is realized in the form of a ray of light, as seen in numerous representations of the Annunciation; likewise, it represents the Incarnation, as expressed by Pseudo-Saint Bernard: "as the splendor of the sun passes through glass without breaking it and penetrates its solidity in its impalpable subtlety, without opening it when it enters and without breaking it when it leaves, so the Word God penetrates Mary's womb and comes forth from her womb intact." This symbolism of light passing through glass is the same concept that was applied to Gothic stained glass, where light symbolizes divine omnipresence. Another symbolism related to light is that which identifies Jesus with the Sun and Mary as the Dawn that precedes him. In addition to all this, in Christianity light can also signify truth, virtue and salvation. In patristics, light is a symbol of eternity and the heavenly world: according to Saint Bernard, souls separated from the body will be "plunged into an immense ocean of eternal light and luminous eternity". On the other hand, in ancient Christianity, baptism was initially called "illumination". 
In Orthodox Christianity, light is, more than a symbol, a "real aspect of divinity," according to Vladimir Lossky – a reality that, as Saint Simeon the New Theologian expressed, can be apprehended by the human being. Because of the opposition of light and darkness, this element has also been used on occasion as a repellent of demons, so that light has often been present in various acts and ceremonies such as circumcision, baptisms, weddings or funerals, in the form of candles or fires. In Christian iconography, light is also present in the halos of the saints, which used to be made – especially in medieval art – with a golden nimbus, a circle of light placed around the heads of saints, angels and members of the Holy Family. In Fra Angelico's The Annunciation, in addition to the halo, the artist placed rays of light radiating from the figure of the archangel Gabriel, to emphasize his divinity, the same resource he uses with the dove symbolizing the Holy Spirit. On other occasions, it is God himself who is represented in the form of rays of sunlight, as in The Baptism of Christ (1445) by Piero della Francesca. The rays can also signify God's wrath, as in The Tempest (1505) by Giorgione. At other times light represents eternity or divinity: in the vanitas genre, beams of light used to focus on objects whose transience was to be emphasized as a symbol of the ephemerality of life, as in Vanities (1645) by Harmen Steenwijck, where a powerful beam of light illuminates the skull in the center of the painting. Between the 14th and 15th centuries Italian painters used supernatural-looking lights in night scenes to depict miracles: for example, in the Annunciation to the Shepherds by Taddeo Gaddi (Santa Croce, Florence) or in the Stigmatization of Saint Francis by Gentile da Fabriano (1420, private collection). In the 16th century, supernatural lights with brilliant effects were also used to point out miraculous events, as in Matthias Grünewald's Risen Christ (1512-1516, Isenheim Altarpiece, Unterlinden Museum, Colmar) or in Titian's Annunciation (1564, San Salvatore, Venice). In the following century, Rembrandt and Caravaggio identified light in their works with divine grace and as an agent of action against evil. The Baroque was the period in which light became most symbolic: in medieval art the luminosity of the backgrounds, of the halos of the saints and other objects – generally made with gold leaf – was an attribute that did not correspond to real luminosity, while in the Renaissance it responded more to a desire for experimentation and aesthetic delight; Rembrandt was the first to combine both concepts: the divine light is a real, sensory light, but with a strong symbolic charge, an instrument of revelation.
For Schelling, light was a medium in which the "universal soul" (Weltseele) moved. For Hegel, light was the "ideality of matter", the foundation of the material world. Between the 19th and 20th centuries, a more scientific view of light prevailed. Science had been trying to unravel the nature of light since the early Modern Age, with two main theories: the corpuscular theory, defended by Descartes and Newton; and the wave theory, defended by Christiaan Huygens, Thomas Young and Augustin-Jean Fresnel. Later, James Clerk Maxwell presented an electromagnetic theory of light. Finally, Albert Einstein brought together the corpuscular and wave theories. Light can also have a symbolic character in landscape painting: in general, dawn and the passage from night to day represent the divine plan – or cosmic system – that transcends the simple will of the human being; dawn also symbolizes the renewal and redemption of Christ. On other occasions, the sun and the moon have been associated with various vital forces: thus, the sun and the day are associated with the masculine, the vital force and energy; and the moon and the night with the feminine, rest, sleep and spirituality, sometimes even death. In other religions light also has a transcendent meaning: in Buddhism it represents truth and the overcoming of matter in the ascent to nirvana. In Hinduism it is synonymous with wisdom and the spiritual understanding of participation with divinity (atman); it is also the manifestation of Krishna, the "Lord of Light". In Islam it is the sacred name Nûr. According to the Koran (24:35), "Allah is the light of the heavens and the earth. Light upon light! Allah guides to his light whomever he wills". In the Zohar of the Jewish Kabbalah the primordial light Or (or Awr) appears, and the universe is described as divided between the empires of light and darkness; in Jewish synagogues there is also usually a lamp of "eternal light" or ner tamid. Finally, in Freemasonry, the search for light is considered the ascent through the various Masonic degrees; some Masonic symbols, such as the square, the compasses and the holy book, are called "great lights"; the principal Masonic officials are also called "lights". On the other hand, initiation into Freemasonry is called "receiving the light".

History

The use of light is intrinsic to painting, so it has been present directly or indirectly since prehistoric times, when cave paintings sought light and relief effects by taking advantage of the roughness of the walls where these scenes were represented. However, serious attempts at greater experimentation in the technical representation of light did not take place until classical Greco-Roman art: Francisco Pacheco, in El arte de la pintura (1649), points out that "adumbration was invented by Saurias of Samos, covering or staining the shadow of a horse seen in the sunlight". Apollodorus of Athens, on the other hand, is credited with the invention of chiaroscuro, a procedure of contrast between light and shadow to produce effects of luminous reality in a two-dimensional representation such as painting. The effects of light and shadow were also developed by Greek scenographers in a technique called skiagraphia, based on the contrast between black and white, to the point that they were called "shadow painters".
The first scientific studies on light also emerged in Greece: Aristotle stated, in relation to colors, that they are "mixtures of different forces of sunlight and the light of fire, air and water", and that "darkness is due to the deprivation of light". One of the most famous Greek painters was Apelles, one of the pioneers in the representation of light in painting. Pliny said of Apelles that he was the only one who "painted what cannot be painted, thunder, lightning and thunderbolts". Another outstanding painter was Nicias of Athens, of whom Pliny praised the "care he took with light and shade to achieve the appearance of relief". With the emergence of landscape painting, a new method was developed to represent distance through gradations of light and shadow, contrasting more strongly the plane closest to the viewer and progressively blurring with distance. These early landscape painters created the modeling through shades of light and shadow, without mixing the colors on the palette. Claudius Ptolemy explained in his Optics how painters created the illusion of depth through distances that seemed "veiled by air". In general, the strongest contrasts were made in the areas closest to the observer and progressively reduced towards the background. This technique was picked up by early Christian and Byzantine art, as seen in the apsidal mosaic of Sant'Apollinare in Classe, and even reached as far as India, as seen in the Buddhist murals of Ajantā. In the 5th century the philosopher John Philoponus, in his commentary on Aristotle's Meteorology, outlined a theory on the subjective effect of light and shadow in painting, known today as "Philoponus' rule". This effect was already known empirically by ancient painters. Cicero was of the opinion that painters saw more than normal people in umbris et eminentia ("in shadows and eminences"), that is, depth and protrusion. And Pseudo-Longinus – in his work On the Sublime – said that "although the colors of shadow and light are on the same plane, side by side, the light jumps immediately into view and seems not only to stand out but actually to be closer." Hellenistic art was fond of light effects, especially in landscape painting, as seen in the stuccoes of La Farnesina. Chiaroscuro was widely used in Roman painting, as seen in the illusory architectures of the frescoes of Pompeii, although it disappeared during the Middle Ages. Vitruvius recommended northern light as the most suitable for painting, being more constant due to its low mutability in tone. Later, in Paleochristian art, the taste for contrasts between light and shadow became evident – as can be seen in Christian sepulchral paintings and in the mosaics of Santa Pudenziana and Santa Maria Maggiore – in such a way that this style has sometimes been called "ancient impressionism". Byzantine art inherited the use of illusionistic touches of light that were used in Pompeian art, but whereas in the original their main function was naturalistic, here they are already a rhetorical formula far removed from the representation of reality. In Byzantine art, as well as in Romanesque art, which it powerfully influenced, the luminosity and splendor of shines and reflections, especially of gold and precious stones, were more valued, with a more aesthetic than pictorial component, since these shines were synonymous with beauty, of a type more spiritual than material. This brilliance was identified with divine light, as Abbot Suger did to justify his expenditure on jewels and precious materials.
Both Greek and Roman art laid the foundations of the style known as classicism, whose main premises are truthfulness, proportion and harmony. Classicist painting is fundamentally based on drawing as a preliminary design tool, over which the pigment is applied taking into account a correct proportion of chromaticism and shading. These precepts laid the foundations of a way of understanding art that has lasted throughout history, with a series of cyclical ups and downs in which they have been followed to a greater or lesser extent: some of the periods in which the classical canons were returned to were the Renaissance, Baroque classicism, Neoclassicism and academicism.

Medieval art

The art historian Wolfgang Schöne divided the history of painting in terms of light into two periods: "proper light" (Eigenlicht), which would correspond to medieval art; and "illuminating light" (Beleuchtungslicht), which would develop in modern and contemporary art (Über das Licht in der Malerei, Berlin, 1979). In the Middle Ages, light had a strong symbolic component in art, since it was considered a reflection of divinity. Within medieval scholastic philosophy, a current called the aesthetics of light emerged, which identified light with divine beauty and greatly influenced medieval art, especially Gothic art: the new Gothic cathedrals were brighter, with large windows that flooded the interior space, which was indefinite, without limits, as a concretion of an absolute, infinite beauty. The introduction of new architectural elements such as the pointed arch and the ribbed vault, together with the use of buttresses and flying buttresses to support the weight of the building, allowed the opening of windows covered with stained glass that filled the interior with light, which gained in transparency and luminosity. These stained-glass windows allowed the light that entered through them to be nuanced, creating fantastic plays of light and color, fluctuating at different times of the day, which were reflected in a harmonious way in the interior of the buildings. Light was associated with divinity, but also with beauty and perfection: according to Saint Bonaventure (De intelligentiis), the perfection of a body depends on its luminosity ("perfectio omnium eorum quae sunt in ordine universo, est lux"). William of Auxerre (Summa Aurea) also related beauty and light, so that a body is more or less beautiful according to its degree of radiance. This new aesthetic often ran parallel to the advances of science in subjects such as optics and the physics of light, especially thanks to the studies of Roger Bacon. At this time the works of Alhacen also became known, which would be collected by Witelo in De perspectiva (ca. 1270–1278) and Adam Pulchrae Mulieris in Liber intelligentiis (ca. 1230). The new prominence given to light in medieval times had a powerful influence on all artistic genres, to the point that Daniel Boorstin points out that "it was the power of light that produced the most modern artistic forms, because light, the almost instantaneous messenger of sensation, is the swiftest and most transitory element". In addition to architecture, light had a special influence on the miniature, with manuscripts illuminated with bright and brilliant colors, generally thanks to the use of pure colors (white, red, blue, green, gold and silver), which gave the image a great luminosity, without shading or chiaroscuro.
The conjugation of these elementary colors generates light by the overall concordance, thanks to the approximation of the inks, without having to resort to shading effects to outline the contours. The light radiates from the objects, which are luminous without the need for the play of volumes that will be characteristic of modern painting. In particular, the use of gold in medieval miniatures generated areas of great light intensity, often contrasted with cold and light tones, to provide greater chromaticism. However, in painting, light did not have the prominence it had in architecture: medieval "proper light" was alien to reality and without contact with the spectator, since it neither came from outside – lacking a light source – nor went outward, since it did not expand light. Chiaroscuro was not used, since shadow was forbidden as it was considered a refuge for evil. Light was considered of divine origin and conqueror of darkness, so it illuminated everything equally, with the consequence of the lack of modeling and volume in the objects, a fact that resulted in the weightless and incorporeal image that was sought to emphasize spirituality. Although there is a greater interest in the representation of light, it is more symbolic than naturalistic. Just as in architecture the stained glass windows created a space where illumination took on a transcendent character, in painting a spatial staging was developed through gold backgrounds, which although they did not represent a physical space, they did represent a metaphysical realm, linked to the sacred. This "gothic light" was a feigned illumination and created a type of unreal image that transcended mere nature. The gold background reinforced the sacred symbolism of light: the figures are immersed in an indeterminate space of unnatural light, a scenario of sacred character where figures and objects are part of the religious symbolism. Cennino Cennini (Il libro dell'Arte), compiled various technical procedures for the use of gold leaf in painting (backgrounds, draperies, nimbuses), which remained in force until the 16th century. Gold leaf was used profusely, especially in halos and backgrounds, as can be seen in Duccio's Maestà, which shone brightly in the interior of the cathedral of Siena. Sometimes, before applying the gold leaf, a layer of red clay was spread; after wetting the surface and placing the gold leaf, it was smoothed and polished with ivory or a smooth stone. To achieve more brilliance and to catch the light, incisions were made in the gilding. It is noteworthy that in early Gothic painting there are no shadows, but the entire representation is uniformly illuminated; according to Hans Jantzen, "to the extent that medieval painting suppresses the shadow, it raises its sensitive light to the power of a super-sensible light". In Gothic painting there is a progressive evolution in the use of light: the linear or Franco-Gothic Gothic was characterized by linear drawing and strong chromaticism, and gave greater importance to the luminosity of flat color than to tonality, emphasizing chromatic pigment as opposed to luminous gradation. With the Italic or Trecentist Gothic a more naturalistic use of light began, characterized by the approach to the representation of depth – which would crystallize in the Renaissance with the linear perspective – the studies on anatomy and the analysis of light to achieve tonal nuance, as seen in the work of Cimabue, Giotto, Duccio, Simone Martini, and Ambrogio Lorenzetti. 
In the Flemish Gothic period, the technique of oil painting emerged, which provided brighter colors and allowed their gradation in different chromatic ranges, while facilitating greater detail in the details (Jan van Eyck, Rogier van der Weyden, Hans Memling, Gerard David). Between the 13th and 14th centuries a new sensibility towards a more naturalistic representation of reality emerged in Italy, which had as one of its contributing factors the study of a realistic light in the pictorial composition. In the frescoes of the Scrovegni Chapel (Padua), Giotto studied how to distinguish flat and curved surfaces by the presence or absence of gradients and how to distinguish the orientation of flat surfaces by three tones: lighter for horizontal surfaces, medium for frontal vertical surfaces and darker for receding vertical surfaces. Giotto was the first painter to represent sunlight, a type of soft, transparent illumination, but one that already served to model figures and enhance the quality of clothes and objects. For his part, Taddeo Gaddi – in his Annunciation to the Shepherds (Baroncelli Chapel, Santa Croce, Florence) – depicted divine light in a night scene with a visible light source and a rapid fall in the pattern of light distribution characteristic of point sources of light, through contrasts of yellow and violet. In the Netherlands, the brothers Hubert and Jan van Eyck and Robert Campin sought to capture various plays of light on surfaces of different textures and sheen, imitating the reflections of light on mirrors and metallic surfaces and highlighting the brilliance of colored jewels and gems (Triptych of Mérode, by Campin, 1425–1428; Polyptych of Ghent, by Hubert and Jan van Eyck, 1432). Hubert was the first to develop a certain sense of saturation of light in his Hours of Turin (1414-1417), in which he recreated the first "modern landscapes" of Western painting – according to Kenneth Clark. In these small landscapes the artist recreates effects such as the reflection of the evening sky on the water or the light sparkling on the waves of a lake, effects that would not be seen again until the Dutch landscape painting of the 17th century. In the Ghent Polyptych (1432, Saint Bavo's Cathedral, Ghent), by Hubert and Jan, the landscape of The Adoration of the Mystic Lamb melts into light in the celestial background, with a subtlety that only the Baroque Claude of Lorraine would later achieve. Jan van Eyck developed the light experiments of his brother and managed to capture an atmospheric luminosity of naturalistic aspect in his works, in paintings such as The Virgin of Chancellor Rolin (1435, Louvre Museum, Paris), or The Arnolfini Marriage (1434, The National Gallery, London), where he combines the natural light that enters through two side windows with that of a single candle lit on the candlestick, which here has a more symbolic than plastic value, since it symbolizes human life. In Van Eyck's workshop, oil painting was developed, which gave a greater luminosity to the painting thanks to the glazes: in general, they applied a first layer of tempera, more opaque, on which they applied the oil (pigments ground in oil), which is more transparent, through several thin layers that let the light pass through, achieving greater luminosity, depth and tonal and chromatic richness. 
Other Netherlandish artists who stood out in the expression of light were: Dirk Bouts, who in his works enhances with light the coloring and, in general, the plastic sense of the composition; Petrus Christus, whose use of light approaches a certain abstraction of the forms; and Geertgen tot Sint Jans, author in some of his works of surprising light effects, as in his Nativity (1490, National Gallery, London), where the light emanates from the body of the Child Jesus in the cradle, a symbol of Divine Grace.

Modern Age Art

Renaissance

The art of the Modern Age – not to be confused with modern art, which is often used as a synonym for contemporary art – began with the Renaissance, which emerged in Italy in the 15th century (Quattrocento), a style influenced by classical Greco-Roman art and inspired by nature, with a more rational and measured component, based on harmony and proportion. Linear perspective emerged as a new method of composition and light became more naturalistic, with an empirical study of physical reality. Renaissance culture meant a return to rationalism, the study of nature and empirical research, with a special influence of classical Greco-Roman philosophy. Theology took a back seat and the object of study of the philosopher returned to the human being (humanism). In the Renaissance, the use of canvas as a support and the technique of oil painting became widespread, especially in Venice from 1460. Oil painting provided a greater chromatic richness and facilitated the representation of brightness and light effects, which could be rendered in a wider range of shades. In general, Renaissance light tended to be intense in the foreground, diminishing progressively towards the background. It was a fixed lighting, which meant an abstraction with respect to reality, since it created an aseptic space subordinated to the idealizing character of Renaissance painting; to reconvert this ideal space into a real atmosphere, a slow process was followed based on the subordination of volumetric values to lighting effects, through the dissolution of the solidity of forms in the luminous space. During this period, chiaroscuro was recovered as a method to give relief to objects, while the study of gradation as a technique to diminish the intensity of color, and of modeling to graduate the different values of light and shadow, was deepened. Renaissance natural light not only determined the space of the pictorial composition, but also the volume of figures and objects. It is a light that loses the metaphorical character of Gothic light and becomes a tool for measuring and ordering reality, shaping a plastic space through a naturalistic representation of light effects. Even when light retains a metaphorical reference – in religious scenes – it is a light subordinated to the realistic composition. Light had a special relevance in landscape painting, a genre in which it signified the transition from a symbolic representation in medieval art to a naturalistic transcription of reality. Light is the medium that unifies all parts of the composition into a structured and coherent whole. According to Kenneth Clark, "the sun shines for the first time in the landscape of the Flight into Egypt that Gentile da Fabriano painted in his Adoration of 1423". This sun is a golden disk, which is reminiscent of medieval symbolism, but its light is already fully naturalistic, spilling over the hillside, casting shadows and creating the compositional space of the image.
In the Renaissance, the first theoretical treatises on the representation of light in painting appeared: Leonardo da Vinci dedicated a good part of his Treatise on Painting to the scientific study of light. Albrecht Dürer investigated a mathematical procedure to determine the location of shadows cast by objects illuminated by point source lights, such as candlelight. Giovanni Paolo Lomazzo devoted the fourth book of his Trattato (1584) to light, in which he arranged light in descending order from primary sunlight, divine light and artificial light to the weaker secondary light reflected by illuminated bodies. Cennino Cennini took up in his treatise Il libro dell'arte the rule of Philoponus on the creation of distance by contrasts: "the farther away you want the mountains to appear, the darker you will make your color; and the closer you want them to appear, the lighter you will make the colors". Another theoretical reference was Leon Battista Alberti, who in his treatise De pictura (1435) pointed out the indissolubility of light and color, and affirmed that "philosophers say that no object is visible if it is not illuminated and has no color. Therefore they affirm that between light and color there is a great interdependence, since they make themselves reciprocally visible". In his treatise, Alberti pointed out three fundamental concepts in painting: circumscriptio (drawing, outline), compositio (arrangement of the elements), and luminum receptio (illumination). He stated that color is a quality of light and that to color is to "give light" to a painting. Alberti pointed out that relief in painting was achieved by the effects of light and shadow (lumina et umbrae), and warned that "on the surface on which the rays of light fall the color is lighter and more luminous, and that the color becomes darker where the strength of the light gradually diminishes." Likewise, he spoke of the use of white as the main tool for creating brilliance: "the painter has nothing but white pigment (album colorem) to imitate the flash (fulgorem) of the most polished surfaces, just as he has nothing but black to represent the most extreme darkness of the night. Thus, the darker the general tone of the painting, the more possibilities the artist has to create light effects, as they will stand out more. Alberti's theories greatly influenced Florentine painting in the mid-15th century, so much so that this style is sometimes called pittura di luce (light painting), represented by Domenico Veneziano, Fra Angelico, Paolo Uccello, Andrea del Castagno and the early works of Piero della Francesca. Domenico Veneziano, who as his name indicates was originally from Venice but settled in Florence, was the introducer of a style based more on color than on line. In one of his masterpieces, The Virgin and Child with Saint Francis, Saint John the Baptist, Saint Cenobius and Saint Lucy (c. 1445, Uffizi, Florence), he achieved a believably naturalistic representation by combining the new techniques of representing light and space. The solidity of the forms is solidly based on the light-shadow modeling, but the image also has a serene and radiant atmosphere that comes from the clear sunlight that floods the courtyard where the scene takes place, one of the stylistic hallmarks of this artist. Fra Angelico synthesized the symbolism of the spiritual light of medieval Christianity with the naturalism of Renaissance scientific light. 
He knew how to distinguish between the light of dawn, noon and twilight, a diffuse and non-contrasting light, like an eternal spring, which gives his works an aura of serenity and placidity that reflects his inner spirituality. In Scenes from the Life of Saint Nicholas (1437, Pinacoteca Vaticana, Rome) he applied Alberti's method of balancing illuminated and shaded halves, especially in the figure with his back turned and the mountainous background. Uccello was also a great innovator in the field of pictorial lighting: in his works – such as The Battle of San Romano (1456, Musée du Louvre, Paris) – each object is conceived independently, with its own lighting that defines its corporeality, in conjunction with the geometric values that determine its volume. These objects are grouped together in a scenographic composition, with a type of artificial lighting reminiscent of that of the performing arts. In turn, Piero della Francesca used light as the main element of spatial definition, establishing a system of volumetric composition in which even the figures are reduced to mere geometric outlines, as in The Baptism of Christ (1440-1445, The National Gallery, London). According to Giulio Carlo Argan, Piero did not consider "a transmission of light, but a fixation of light", which turns the figures into references of a certain definition of space. He carried out scientific studies of perspective and optics (De prospectiva pingendi) and in his works, full of a colorful luminosity of great beauty, he uses light as both an expressive and a symbolic element, as can be seen in his frescoes of San Francesco in Arezzo. Della Francesca was one of the first modern artists to paint night scenes, such as The Dream of Constantine (Legend of the True Cross, 1452–1466, San Francesco in Arezzo). He cleverly assimilated the luminism of the Flemish school, which he combined with Florentine spatialism: in some of his landscapes there are luminous moonscapes reminiscent of the Van Eyck brothers, although transcribed with the golden Mediterranean light of his homeland. Masaccio was a pioneer in using light to emphasize the drama of the scene, as seen in his frescoes in the Brancacci Chapel of Santa Maria del Carmine (Florence), where he uses light to configure and model the volume, while the combination of light and shadow serves to determine the space. In these frescoes, Masaccio achieved a sense of perspective without resorting to geometry, as would be usual in linear perspective, but by distributing light among the figures and other elements of the representation. In The Tribute Money, for example, he placed a light source outside the painting that illuminates the figures obliquely, casting shadows on the ground with which the artist plays. Straddling the Gothic and Renaissance periods, Gentile da Fabriano was also a pioneer in the naturalistic use of light: in the predella of the Adoration of the Magi (1423, Uffizi, Florence) he distinguished between natural, artificial and supernatural light sources, using a technique of gold leaf and graphite to create the illusion of light through tonal modeling. Sandro Botticelli, a painter of lingering Gothic sensibility, moved away from the naturalistic style initiated by Masaccio and returned to a certain symbolic concept of light.
In The Birth of Venus (1483-1485, Uffizi, Florence), he symbolized the dichotomy between matter and spirit with the contrast between light and darkness, in line with the Neoplatonic theories of the Florentine Academy of which he was a follower: on the left side of the painting the light corresponds to the dawn, both physical and symbolic, since the female character that appears embracing Zephyrus is Aurora, the goddess of dawn; on the right side, darker, are the earth and the forest, as metaphorical elements of matter, while the character that tends a mantle to Venus is the Hour, which personifies time. Venus is in the center, between day and night, between sea and land, between the divine and the human. A remarkable pictorial school emerged in Venice, characterized by the use of canvas and oil painting, where light played a fundamental role in the structuring of forms, while great importance was given to color: chromaticism would be the main hallmark of this school, as it would be in the 16th century with Mannerism. Its main representatives were Carlo Crivelli, Antonello da Messina, and Giovanni Bellini. In the Altarpiece of Saint Job (c. 1485, Gallerie dell'Accademia, Venice), Bellini brought together for the first time the Florentine linear perspective with Venetian color, combining space and atmosphere, and made the most of the new oil technique initiated in Flanders, thus creating a new artistic language that was quickly imitated. According to Kenneth Clark, Bellini "was born with the landscape painter's greatest gift: emotional sensitivity to light". In his Christ on the Mount of Olives (1459, National Gallery, London) he made the effects of light the driving force of the painting, with a shadowy valley in which the rising sun peeks through the hills. This emotive light is also seen in his Resurrection at the Staatliche Museen in Berlin (1475-1479), where the figure of Jesus radiates a light that bathes the sleeping soldiers. While his early works are dominated by sunrises and sunsets, in his mature production he appreciates more the full light of day, in which the forms merge with the general atmosphere. However, he also knew how to take advantage of the cold and pale lights of winter, as in the Virgin of the Meadow (1505, National Gallery, London), where a pale sun struggles with the shadows of the foreground, creating a fleeting effect of marble light. The Renaissance saw the emergence of the sfumato technique, traditionally attributed to Leonardo da Vinci, which consisted of the degradation of light tones to blur the contours and thus give a sense of remoteness. This technique was intended to give greater verisimilitude to the pictorial representation, by creating effects similar to those of human vision in environments with a wide perspective. The technique consisted of a progressive application of glazes and the feathering of the shadows to achieve a smooth gradient between the various parts of light and shadow of the painting, with a tonal gradation achieved with progressive retouching, leaving no trace of the brushstroke. It is also called "aerial perspective", since its results resemble the vision in a natural environment determined by atmospheric and environmental effects. This technique was used, in addition to Leonardo, by Dürer, Giorgione and Bernardino Luini, and later by Velázquez and other Baroque painters. Leonardo was essentially concerned with perception, the observation of nature. He sought life in painting, which he found in color, in the light of chromaticism. 
In his Treatise on Painting (compiled c. 1540) he stated that painting is the sum of light and darkness (chiaroscuro), which gives movement, life: according to Leonardo, darkness is the body and light is the spirit, and the mixture of both is life. In his treatise he established that "painting is a composition of light and shadows, combined with the various qualities of all the simple and compound colors". He also distinguished between illumination (lume) and brilliance (lustro), and warned that "opaque bodies with hard and rough surfaces never generate luster in any illuminated part". The Florentine polymath included light among the main components of painting and pointed it out as an element that articulates pictorial representation and conditions the spatial structure and the volume and chromaticism of objects and figures. He was also concerned with the study of shadows and their effects, which he analyzed together with light in his treatise. He likewise distinguished between shadow (ombra) and darkness (tenebre), the former being an oscillation between light and darkness. He also studied nocturnal painting, for which he recommended the presence of fire as a means of illumination, and he noted the different gradations of light and color required according to the distance from the light source. Leonardo was one of the first artists to be concerned with the degree of illumination of the painter's studio, suggesting that for nudes or flesh tones the studio should have uncovered lights and red walls, while for portraits the walls should be black and the light diffused by a canopy. Leonardo's subtle chiaroscuro effects are perceived in his female portraits, in which the shadows fall on the faces as if submerging them in a soft and mysterious atmosphere. In these works he advocated intermediate lights, stating that "the contours and figures of dark bodies are poorly distinguished in the dark as well as in the light, but in the intermediate zones between light and shadow they are better perceived". Likewise, on color he wrote that "colors placed in shadows will participate to a greater or lesser degree in their natural beauty according as they are placed in greater or lesser darkness. But if the colors are placed in a luminous space, then they will possess a beauty all the greater the more splendorous the luminosity". The other great name of the early Cinquecento was Raphael, a serene and balanced artist whose work shows a certain idealism framed in a realistic technique of great virtuoso execution. According to Giovanni Paolo Lomazzo, Raphael "has given enchanting, loving and sweet light, so that his figures appear beautiful, pleasing and intricate in their contours, and endowed with such relief that they seem to move." Some of his lighting solutions were quite innovative, with resources halfway between Leonardo and Caravaggio, as seen in The Transfiguration (1517-1520, Vatican Museums, Vatican City), in which he divides the image into two halves, the heavenly and the earthly, each with different pictorial resources. In the Liberation of Saint Peter (1514, Vatican Museums, Vatican City) he painted a nocturnal scene in which the light radiating from the angel in the center stands out, giving a sensation of depth, while at the same time it is reflected in the breastplates of the guards, creating intense luminous effects.
This was perhaps the first work to include artificial lighting with a naturalistic sense: the light radiating from the angel influences the illumination of the surrounding objects, while diluting the distant forms. Outside Italy, Albrecht Dürer was especially concerned with light in his watercolor landscapes, treated with an almost topographical detail, in which he shows a special delicacy in the capture of light, with poetic effects that prefigure the sentimental landscape of Romanticism. Albrecht Altdorfer showed a surprising use of light in The Battle of Alexander at Issus (1529, Alte Pinakothek, Munich), where the appearance of the sun among the clouds produces a supernatural refulgence, effects of bubbling light that also precede Romanticism. Matthias Grünewald was a solitary and melancholic artist, whose original work reflects a certain mysticism in the treatment of religious themes, with an emotive and expressionist style still rooted in the Middle Ages. His main work was the Isenheim Altarpiece (1512-1516, Unterlinden Museum, Colmar), notable for the refulgent halo in which he places his Risen Christ. Between Gothic and Renaissance is the unclassifiable work of Bosch, a Flemish artist gifted with a great imagination, author of dreamlike images that continue to surprise for their fantasy and originality. In his works – and especially in his landscape backgrounds – there is a great skill in the use of light in different temporal and environmental circumstances, but he also knew how to recreate in his infernal scenes fantastic effects of flames and fires, as well as supernatural lights and other original effects, especially in works such as The Last Judgment (c. 1486–1510, Groeninge Museum, Bruges), Visions of the Hereafter (c. 1490, Doge's Palace, Venice), The Garden of Earthly Delights (c. 1500–1505, Museo del Prado, Madrid), The Haywain (c. 1500–1502, Museo del Prado, Madrid) or The Temptation of Saint Anthony (c. 1501, Museu Nacional de Arte Antiga, Lisbon). Bosch had a predilection for the effects of light generated by fire, by the glow of flames, which gave rise to a new series of paintings in which the effects of violent and fantastic lights originated by fire stood out, as can be seen in a work by an anonymous artist linked to the workshop of Lucas van Leyden, Lot and his Daughters (c. 1530, Musée du Louvre, Paris), or in some works by Joachim Patinir, such as Charon Crossing the Styx (c. 1520–1524, Museo del Prado, Madrid) or Landscape with the Destruction of Sodom and Gomorrah (c. 1520, Museum Boijmans Van Beuningen, Rotterdam). These effects also influenced Giorgione, as well as some Mannerist painters such as Lorenzo Lotto, Dosso Dossi and Domenico Beccafumi.

Mannerism

At the end of the High Renaissance, in the middle of the 16th century, Mannerism followed, a movement that abandoned nature as a source of inspiration to seek a more emotional and expressive tone, in which the artist's subjective interpretation of the work of art became more important, with a taste for sinuous and stylized forms, deformation of reality, distorted perspectives and contrived atmospheres. In this style light was used in a theatrical way, with an unreal treatment, seeking a colored light of different origins, whether a cold moonlight or a warm firelight. Mannerism broke with the full Renaissance light by introducing night scenes with intense chromatic interplay between light and shadow and a dynamic rhythm far from Renaissance harmony.
Mannerist light, in contrast to Renaissance classicism, took on a more expressive function, with a natural origin but an unreal treatment, a disarticulating factor of the classicist balance, as seen in the work of Pontormo, Rosso or Beccafumi. In Mannerism, the Renaissance optical scheme of light and shadow was broken by suppressing the visual relationship between the light source and the illuminated parts of the painting, as well as in the intermediate steps of gradation. The result was strong contrasts of color and chiaroscuro, and an artificial and refulgent aspect of the illuminated parts, independent of the light source. Between Renaissance classicism and Mannerism lies the work of Michelangelo, one of the most renowned artists of universal stature. His use of light was generally with plastic criteria, but sometimes he used it as a dramatic resource, especially in his frescoes in the Pauline Chapel: Crucifixion of Saint Peter and Conversion of Saint Paul (1549). Placed on opposite walls, the artist valued the entry of natural light into the chapel, which illuminated one wall and left the other in semi-darkness: in the darkest part he placed the Crucifixion, a subject more suitable for the absence of light, which emphasizes the tragedy of the scene, intensified in its symbolic aspect by the fading light of dusk that is perceived on the horizon; instead, the Conversion receives natural light, but at the same time the pictorial composition has more luminosity, especially for the powerful ray of light that comes from the hand of Christ and is projected on the figure of Saul, who thanks to this divine intervention is converted to Christianity. Another reference of Mannerism was Correggio, the first artist – according to Vasari – to apply a dark tone in contrast to light to produce effects of depth, while masterfully developing the Leonardoesque sfumato through diffuse lights and gradients. In his work The Nativity (1522, Gemäldegalerie Alte Meister, Dresden) he was the first to show the birth of Jesus as a "miracle of light", an assimilation that would become habitual from then on. In The Assumption of the Virgin (1526-1530), painted on the dome of the cathedral of Parma, he created an illusionistic effect with figures seen from below (sotto in sù) that would be the forerunner of Baroque optical illusionism; in this work the subtle nuances of his flesh tones stand out, as well as the luminous break of glory of its upper part. Jacopo Pontormo, a disciple of Leonardo, developed a strongly emotional, dynamic style with unreal effects of space and scale, in which a great mastery of color and light can be glimpsed, applied by color stains, especially red. Domenico Beccafumi stood out for his colorism, fantasy and unusual light effects, as in The Birth of the Virgin (1543, Pinacoteca Nazionale di Siena). Rosso Fiorentino also developed an unusual coloring and fanciful play of light and shadow, as in his Descent of Christ (1521, Pinacoteca Comunale, Volterra). Luca Cambiasso showed a great interest in nocturnal illumination, which is why he is considered a forerunner of tenebrism. Bernardino Luini, a disciple of Leonardo, showed a Leonardoesque treatment of light in the Madonna of the Rosebush (c. 1525–1530, Pinacoteca di Brera). Alongside this more whimsical mannerism, a school of a more serene style emerged in Venice that stood out for its treatment of light, which subordinated plastic form to luminous values, as can be seen in the work of Giorgione, Titian, Tintoretto and Veronese. 
In this school, light and color were fused, and Renaissance linear perspective was replaced by aerial perspective, the use of which would culminate in the Baroque. The technique used by these Venetian painters is called "tonalism": it consisted in the superimposition of glazes to form the image through the modulation of color and light, which are harmonized through relations of tone, modulating them in a space of plausible appearance. The color assumes the function of light and shadow, and it is the chromatic relationships that create the effects of volume. In this modality, the chromatic tone depends on the intensity of light and shadow (the color value). Giorgione brought the Leonardesque influence to Venice. He was an original artist, one of the first to specialize in cabinet paintings for private collectors, and the first to subordinate the subject of the work to the evocation of moods. Vasari considered him, together with Leonardo, one of the founders of "modern painting". A great innovator, he reformulated landscape painting both in composition and iconography, with images conceived in depth with a careful modulation of chromatic and light values, as is evident in one of his masterpieces, The Tempest (1508, Gallerie dell'Accademia, Venice). Titian was a virtuoso in the recreation of vibrant atmospheres with subtle shades of light achieved with infinite variations obtained after a meticulous study of reality and a skillful handling of the brushes that demonstrated a great technical mastery. In his Pentecost (1546, Santa Maria della Salute, Venice) he made rays of light emanate from the dove representing the Holy Spirit, ending in tongues of fire on the heads of the Virgin and the apostles, with surprising light effects that were innovative for his time. This research gradually evolved into increasingly dramatic effects, giving more emphasis to artificial lighting, as seen in The Martyrdom of Saint Lawrence (1558, Jesuit Church, Venice), where he combines the light of the torches and the fire of the grill where the saint is martyred with the supernatural effect of a powerful flash of divine light in the sky that is projected on the figure of the saint. This experimentation with light influenced the work of artists such as Veronese, Tintoretto, Jacopo Bassano and El Greco. Tintoretto liked to paint shut in his studio with the windows closed, by the light of candles and torches, which is why his paintings are often described as di notte e di fuoco ("by night and fire"). In his works, of deep atmospheres, with thin and vertical figures, the violent effects of artificial lights stand out, with strong chiaroscuro and phosphorescent effects. These luminous effects were adopted by other members of the Venetian school such as the Bassanos (Jacopo, Leandro and Francesco), as well as by the so-called "Lombard illuminists" (Giovanni Girolamo Savoldo, Moretto da Brescia), while influencing El Greco and Baroque tenebrism. Another artist framed in the painting di notte e di fuoco was Jacopo Bassano, whose indirect incidence lights influenced Baroque naturalism. In works such as Christ in the House of Mary, Martha and Lazarus (c. 1577, Museum of Fine Arts, Houston), he combined natural and artificial lights with striking lighting effects. For his part, Paolo Veronese was heir to the luminism of Giovanni Bellini and Vittore Carpaccio, in scenes of Palladian architecture with dense morning lights, golden and warm, without prominent shadows, emphasizing the brightness of fabrics and jewels.
In the Allegory of the Battle of Lepanto (1571) he divided the scene into two halves: below, the battle, and above, the Virgin with the saints who ask her favor for the combat, accompanied by angels who hurl flaming bolts down towards the fray, creating spectacular lighting effects. Outside Italy it is worth mentioning the work of Pieter Brueghel the Elder, author of genre scenes and landscapes that denote a great sensitivity towards nature. In some of his works the influence of Hieronymus Bosch can be seen in his fire lights and fantastic effects, as in The Triumph of Death (c. 1562, Museo del Prado, Madrid). In some of his landscapes he added the sun as a direct source of luminosity, such as the yellow sun of the Netherlandish Proverbs (1559, Staatliche Museen, Berlin), the red winter sun of The Census at Bethlehem (1566, Royal Museums of Fine Arts of Belgium, Brussels) or the evening sun of Landscape with the Fall of Icarus (c. 1558, Royal Museums of Fine Arts of Belgium, Brussels). El Greco worked in Spain during this period, a singular painter who developed an individual style, marked by the influence of the Venetian school – he lived for a time in Venice – as well as by Michelangelo, from whom he took his conception of the human figure. In El Greco's work, light always prevails over shadows, as a clear symbol of the preeminence of faith over unbelief. In one of his first works from Toledo, The Disrobing of Christ (El Expolio) for the sacristy of the cathedral of Toledo (1577), a zenithal light illuminates the figure of Jesus, focusing on his face, which becomes the focus of light in the painting. In the Trinity of the church of Santo Domingo el Antiguo (1577-1580) he introduced a dazzling light of glory in an intense golden yellow. In The Martyrdom of Saint Maurice (1580-1582, Royal Monastery of San Lorenzo de El Escorial) he created two areas of differentiated light: the natural light that surrounds the earthly characters and that of the break of glory in the sky, furrowed with angels. Among his last works stands out The Adoration of the Shepherds (1612-1613, Museo del Prado, Madrid), where the focus of light is the Child Jesus, who radiates his luminosity around him, producing phosphorescent effects of strong chromatism and luminosity. El Greco's illumination evolved from the light coming from a specific point – or in a diffuse way – of the Venetian school to a light rooted in Byzantine art, in which the figures are illuminated without a specific light source or even a diffuse one. It is an unnatural light, which can come from multiple sources or none at all, an arbitrary and unequal light that produces hallucinatory effects. El Greco had a plastic conception of light: his execution went from dark to light tones, finally applying touches of white that created shimmering effects. The refulgent aspect of his works was achieved through glazes, while the whites were finished with almost dry applications. His light is mystical, subjective, almost spectral in appearance, with a taste for shimmering gleams and incandescent reflections.

Baroque

In the 17th century, the Baroque emerged, a more refined and ornamented style, with the survival of a certain classicist rationalism but with more dynamic and dramatic forms, with a taste for the surprising and the anecdotal, for optical illusions and effects. Baroque painting had a marked geographical differentiating accent, since its development took place in different countries, in various national schools, each with a distinctive stamp.
However, there is a common influence coming again from Italy, where two opposing trends emerged: naturalism (also called Caravaggism), based on the imitation of natural reality, with a certain taste for chiaroscuro – the so-called tenebrism – and classicism, which is equally realistic but with a more intellectual and idealized concept of reality. Later, in the so-called "full Baroque" (second half of the 17th and early 18th centuries), painting evolved towards a more decorative style, with a predominance of mural painting and a certain predilection for optical effects (trompe-l'œil) and luxurious and exuberant scenographies. During this period, many scientific studies on light were carried out (Johannes Kepler, Francesco Maria Grimaldi, Isaac Newton, Christiaan Huygens, Robert Boyle), which influenced its pictorial representation. Newton proved that color comes from the spectrum of white light and designed the first chromatic circle showing the relationships between colors. In this period the pictorial representation of light reached its highest degree of perfection, and tactile form was diluted in favor of a greater visual impression, achieved by giving greater importance to light, with forms losing the sharpness of their contours. In the Baroque, light was studied for the first time as a system of composition, articulated as a regulating element of the painting: light fulfills several functions – symbolic, modeling, illuminating – and begins to be directed as an emphatic element that selects the part of the painting to be highlighted, so that artificial light, which can be manipulated at the free will of the artist, gains importance. Sacred light (nimbus, haloes) was abandoned and natural light was used exclusively, even as a symbolic element. On the other hand, the light of different times of the day (morning, twilight) began to be distinguished. Illumination was conceived as a luminous unit, as opposed to the multiple sources of Renaissance light; in the Baroque there may be several sources, but they are circumscribed to a global and unitary sense of the work. In the Baroque, the nocturne genre became fashionable, which implies a special difficulty in terms of the representation of light, due to the absence of daylight, so that on numerous occasions it was necessary to resort to chiaroscuro and lighting effects from artificial light, while natural light had to come from the moon or the stars. For artificial light, bonfires, candles, lanterns, tapers, fireworks or similar elements were used. These light sources could be direct or indirect, and they could appear in the painting or illuminate the scene from outside.

Naturalism

Chiaroscuro resurfaced during the Baroque, especially in the Counter-Reformation, as a method of focusing the viewer's vision on the primordial parts of religious paintings, which were emphasized as didactic elements, as opposed to the Renaissance "pictorial decor". An exacerbated variant of chiaroscuro was tenebrism, a technique based on strong contrasts of light and shadow, with a violent type of lighting, generally artificial, which gives greater prominence to the illuminated areas, on which a powerful focus of directed light is placed. These effects have a strong dramatism, which emphasizes the scenes represented, generally of a religious type, although they also abound in mythological scenes, still lifes and vanitas.
One of its main representatives was Caravaggio, along with Orazio and Artemisia Gentileschi, Bartolomeo Manfredi, Carlo Saraceni, Giovanni Battista Caracciolo, Pieter van Laer (il Bamboccio), Adam Elsheimer, Gerard van Honthorst, Georges de La Tour, Valentin de Boulogne, the Le Nain brothers and José de Ribera (lo Spagnoletto). Caravaggio was a pioneer in the dramatization of light, in scenes set in dark interiors with strong spotlights of directed light that emphasize one or more characters. With this painter, light acquired a structural character in painting, since, together with drawing and color, it would become one of its indispensable elements. He was influenced by Leonardo's chiaroscuro through The Virgin of the Rocks, which he was able to contemplate in the church of San Francesco Grande in Milan. For Caravaggio, light served to configure the space, controlling its direction and expressive force. He was aware of the artist's power to shape the space at will, so in the composition of a work he would establish beforehand which lighting effects he was going to use, generally opting for sharp contrasts between the figures and the background, with darkness as a starting point: the figures emerge from the dark background and it is the light that determines their position and their prominence in the scene represented. Caravaggesque light is conceptual, not imitative or symbolic, so it transcends materiality and becomes something substantial. It is a projected and solid light, which constitutes the basis of his spatial conception and becomes another volume in space. His main hallmark in depicting light was the diagonal entry of light, which he first used in Boy with a Basket of Fruit (1593-1594, Galleria Borghese, Rome). In The Fortune Teller (1595-1598, Musée du Louvre, Paris) he used a warm golden light of the sunset, which falls directly on the young man and obliquely on the gypsy woman. His pictorial maturity came with the canvases for the Contarelli Chapel in the church of San Luigi dei Francesi in Rome (1599-1600): The Martyrdom of Saint Matthew and The Calling of Saint Matthew. In the first, he established a composition formed by two diagonals defined by the illuminated planes and the shadows that form the volume of the figures, in a complex composition held together by the light, which relates the figures to each other. In the second, a powerful beam of light that enters diagonally from the upper right directly illuminates the figure of Matthew, a beam parallel to the raised arm of Jesus that seems to accompany his gesture; an open shutter of the central window cuts this beam of light at the top, leaving the left side of the image in semi-darkness. In works such as the Crucifixion of Saint Peter and the Conversion of Saint Paul (1600-1601, Cerasi Chapel, Santa Maria del Popolo, Rome) light makes objects and people glow, to the point that it becomes the true protagonist of the works; these scenes are immersed in light in a way that constitutes more than a simple attribute of reality, but rather the medium through which reality manifests itself. In the final stage of his career he accentuated the dramatic tension of his works through a luminism of flashing effects, as in the Seven Works of Mercy (1607, Pio Monte della Misericordia, Naples), a nocturne with several spotlights of light that help to emphasize the acts of mercy depicted in simultaneous action.
Artemisia Gentileschi trained with her father, Orazio Gentileschi, coinciding with the years when Caravaggio lived in Rome, whose work she could appreciate in San Luigi dei Francesi and Santa Maria del Popolo. Her work was channeled into tenebrist naturalism, assuming its most characteristic features: expressive use of light and chiaroscuro, dramatism of the scenes and figures of rounded anatomy. Her most famous work is Judith Beheading Holofernes (two versions: 1612–1613, Museo di Capodimonte, Naples; and 1620, Uffizi, Florence), where the light focuses on Judith, her maid and the Assyrian general, against complete darkness, emphasizing the drama of the scene. In the 1630s, established in Naples, her style adopted a more classicist component, without completely abandoning naturalism, with more diaphanous spaces and clearer and sharper atmospheres, although chiaroscuro remained an essential part of the composition, as a means to create space and give volume and expressiveness to the image. One of her best compositions, owing to the complexity of its lighting, is The Birth of Saint John the Baptist (1630, Museo del Prado, Madrid), where she mixes natural and artificial light: the light from the portal in the upper right part of the painting softens the light inside the room, in a "subtle transition of light values" – according to Roberto Longhi – that would later become common in Dutch painting. Adam Elsheimer was noted for his light studies in landscape painting, with an interest in dawn and dusk lights, as well as night lighting and atmospheric effects such as mists and fogs. His light was strange and intense, with an enamel-like appearance typical of German painting, in a tradition ranging from Lukas Moser to Albrecht Altdorfer. His most famous painting is the Flight into Egypt (1609, Alte Pinakothek, Munich), a night scene that is considered the first moonlit landscape; four sources of light are visible in this work: the shepherds' bonfire, the torch carried by Saint Joseph, the moon and its reflection in the water; the Milky Way can also be perceived, whose representation can likewise be considered the first done in a naturalistic way. Georges de La Tour was a magnificent interpreter of artificial light, generally lamp or candle light, with a visible and precise focus, which he used to place inside the image, emphasizing its dramatic aspect. Sometimes, in order not to be dazzled, the characters place their hands in front of the candle, creating translucent effects on the skin, which acquires a reddish tone, of great realism, proving his virtuosity in capturing reality. While his early works show the influence of Italian Caravaggism, from his stay in Paris between 1636 and 1643 he came closer to Dutch Caravaggism, more prone to the direct inclusion of the light source on the canvas. He thus began his most tenebrist period, with scenes of strong half-light where the light, generally from a candle, illuminates with greater or lesser intensity certain areas of the painting. In general, two types of composition can be distinguished: the fully visible light source (Job Mocked by his Wife, Musée Départemental des Vosges, Épinal; Woman Catching a Flea, Musée Historique Lorrain, Nancy; the Terff Magdalene, Musée du Louvre, Paris) or the light blocked by an object or character, creating a backlit illumination (the Fabius Magdalene, Fabius collection, Paris; The Angel Appearing to Saint Joseph, Musée des Beaux-Arts, Nantes; The Adoration of the Shepherds, Musée du Louvre, Paris).
In his later works he reduces the characters to schematic figures of geometric appearance, like mannequins, in order to fully recreate the effects of light on masses and surfaces (The Repentant Saint Peter, Cleveland Museum of Art; The Newborn, Musée des Beaux-Arts, Rennes; Saint Sebastian Tended by Saint Irene, parish church of Broglie). Despite its plausible appearance, La Tour's lighting is not fully naturalistic, but is filtered through the artist's will; at all times he applies the amount of light and shadow needed to recreate the desired effect; in general, it is a serene and diffuse lighting, which brings out the volume without excessive drama. The light serves to unite the figures, to highlight the part of the painting that best suits the plot of the work; it is a timeless light of a poetic, transcendent character; it is just the light necessary to provide credibility, but it serves a more symbolic than realistic purpose. It is an unreal light, since no candle generates such a serene and diffuse light, a conceptual and stylistic light, which serves only the compositional intention of the painter. Another French Caravaggist was Trophime Bigot, nicknamed Maître à la chandelle (Master of the Candle) for his scenes of artificial light, in which he showed great expertise in the technique of chiaroscuro. The Valencian artist José de Ribera (nicknamed lo Spagnoletto), who lived in Naples, fully assumed the Caravaggesque light, with an anti-idealist style of pasty brushstrokes and dynamic effects of movement. Ribera took up tenebrist illumination in a personal way, filtered through other influences, such as Venetian coloring or the compositional rigor of Bolognese classicism. In his early work he used the violent contrasts of light and shadow characteristic of tenebrism, but from the 1630s he evolved towards a greater chromaticism and clearer and more diaphanous backgrounds. In contrast to the flat painting of Caravaggio, Ribera used a dense paste that gave more volume and emphasized the brightness. One of his best works, The Drunken Silenus (Sileno ebrio, 1626, Museo di Capodimonte, Naples), stands out for the flashes of light that illuminate the various characters, with special emphasis on the naked body of Silenus, illuminated by a flat light of soft, fleshy appearance. In addition to Ribera, Caravaggism in Spain had the figure of Juan Bautista Maíno, a Dominican friar who was drawing teacher to Philip IV and resident in Rome between 1598 and 1612, where he was a disciple of Annibale Carracci; his work stands out for its colorism and luminosity, as in The Adoration of the Shepherds (1611-1613, Museo del Prado, Madrid). Also noteworthy is the work of the still life painters Juan Sánchez Cotán and Juan van der Hamen. In general, Spanish naturalism treated light with a sense close to Caravaggism, but with a certain sensuality coming from the Venetian school and a detailing with Flemish roots. Francisco de Zurbarán developed a somewhat sweetened tenebrism, although one of his best works, Saint Hugo in the Refectory of the Carthusians (c. 1630, Museo de Bellas Artes de Sevilla), stands out for the presence of white, with a subtle play of light and shadow notable for the multiplicity of intensities applied to each figure and object.
In Venice, Baroque painting did not produce such exceptional figures as in the Renaissance and Mannerism, but in the work of artists such as Domenico Fetti, Johann Liss, and Bernardo Strozzi one can perceive the vibrant luminism and the enveloping atmospheres so characteristic of Venetian painting. The Caravaggist novelties had a special echo in Holland, where the so-called Caravaggist School of Utrecht emerged, a series of painters who assumed the description of reality and the chiaroscuro effects of Caravaggio as pictorial principles, on which they developed a new style based on tonal chromaticism and the search for new compositional schemes, resulting in a painting that stands out for its optical values. Among its members were Hendrik Terbrugghen, Dirck van Baburen, and Gerard van Honthorst, all three trained in Rome. The first assumed the thematic repertoire of Caravaggio but with a more sweetened tone, with a sharp drawing, a grayish-silver chromatism and an atmosphere of soft light clarity. Van Baburen sought full light effects rather than chiaroscuro contrasts, with intense volumes and contours. Honthorst was a skillful producer of night scenes, which earned him the nickname Gherardo delle Notti ("Gerard of the Nights"). In works such as Christ before the High Priest (1617), Nativity (1622), The Prodigal Son (1623) or The Procuress (1625), he showed great mastery in the use of artificial light, generally from candles, with one or two light sources that illuminated the scene unevenly, highlighting the most significant parts of the painting and leaving the rest in semi-darkness. Of his Christ on the Column, Joachim von Sandrart said: "the brightness of the candles and lights illuminates everything with a naturalness that resembles life so closely that no art has ever reached such heights". One of the greatest exponents of the symbolic use of light was Rembrandt, an original artist with a strong personal stamp, with a style close to tenebrism but more diffused, without the marked contrasts between light and shadow typical of the Caravaggists, but a more subtle and diffuse penumbra. According to Giovanni Arpino, Rembrandt "invented light, not as heat, but as value. He invented light not to illuminate, but to make his world unapproachable". In general, he elaborated images where darkness predominated, illuminated in certain parts of the scene by a ray of zenithal light of divine connotation; if the light is inside the painting it means that the world is circumscribed to the illuminated part and nothing exists outside this light. Rembrandtian light is a reflection of an external force, which affects the objects causing them to radiate energy, like the retransmission of a message. Although he starts from tenebrism, his contrasts of light and shadow are not as sharp as those of Caravaggio, but he likes more a kind of golden shadows that give a mysterious air to his paintings. In Rembrandt, light was something structural, integrated in form, color and space, in such a way that it dematerializes bodies and plays with the texture of objects. It is a light that is not subject to the laws of physics, which he generally concentrates in one area of the painting, creating a glowing luminosity. In his work, light and shadow interact, dissolving the contours and deforming the forms, which become the sustaining object of the light. According to Wolfgang Schöne, in Rembrandt light and darkness are actually two types of light, one bright and the other dark. 
He would use a canvas as a reflecting or diffusing screen, which he regulated as he wished to obtain the desired illumination in each scene. His concern for light led him not only to its pictorial study, but also to establish the correct placement of his paintings for optimal viewing; thus, in 1639 he advised Constantijn Huygens on the placement of his painting Samson Blinded by the Philistines: "hang this painting where there is strong light, so that it can be seen from a certain distance, and thus it will have the best effect". Rembrandt also masterfully captured light in his etchings, such as The Hundred Guilder Print and The Three Crosses, in which light is almost the protagonist of the scene. Rembrandt picked up the luminous tradition of the Venetian school, as did his compatriot Johannes Vermeer, although while the former stands out for his fantastic effects of light, the latter develops in his work a luminosity of great quality in the local tones. Vermeer imprinted his works – generally everyday scenes in interior spaces – with a pale luminosity that created placid and calm atmospheres. He used a technique called pointillé, a series of dots of pigment with which he enhanced the objects, on which he often applied a luminosity that made the surfaces reflect the light in a special way. Vermeer's light softens the contours without losing the solidity of the forms, in a combination of softness and precision that few other artists have achieved. Nicknamed the "painter of light", Vermeer masterfully synthesized light and color; he knew how to capture the color of light like no one else. In his works, light is itself a color, while shadow is inextricably linked to light. Vermeer's light is always natural – he did not like artificial light – and generally has a tone close to lemon yellow, which together with dull blue and light gray were the main colors of his palette. It is the light that forms the figures and objects, and in conjunction with the color it is what fixes the forms. As for the shadows, they are interspersed in the light, reversing the contrast: instead of fitting the luminous part of the painting into the shadows, it is the shadows that are cut out of the luminous space. Contrary to the practice of chiaroscuro, in which the form is progressively lost in the half-light, Vermeer placed a foreground of dark color to increase the tonal intensity, which reaches its zenith in the middle light; from there he dissolves the color towards white, instead of towards black as was done in chiaroscuro. In Vermeer's work, the painting is an organized structure through which light circulates and is absorbed and diffused by the objects that appear on the scene. He builds the forms thanks to the harmony between light and color, which is saturated, with a predominance of pure colors and cold tones. The light gives visual existence to the space, which in turn receives and diffuses it. Other prominent painters of the Low Countries were the Dutchman Frans Hals and the Fleming Jacob Jordaens. The former had a Caravaggist phase between 1625 and 1630, with a clear chromaticism and diffuse luminosity (The Merry Drinker, 1627–1628, Rijksmuseum, Amsterdam; Malle Babbe, 1629–1630, Gemäldegalerie, Berlin), later evolving towards a more sober, dark and monochromatic style. Jordaens had a style characterized by a bright and fantastic coloring, with strong contrasts of light and shadow and a technique of dense impasto.
Between 1625 and 1630 he had a period in which he deepened the luminous values of his images, in works such as The Martyrdom of Saint Apollonia (1628, Church of Saint Augustine, Antwerp) or The Fecundity of the Earth (1630, Royal Museums of Fine Arts of Belgium, Brussels). One should also mention Godfried Schalcken, a disciple of Gerard Dou who worked not only in his native country but also in England and Germany. An excellent portraitist, in many of his works he used artificial candlelight, influenced by Rembrandt, as in the Portrait of William III (1692-1697, Rijksmuseum, Amsterdam), Portrait of James Stuart, Duke of Lennox and Richmond (1692-1696, Leiden Collection, New York), Young Man and Woman Studying a Statue of Venus by Lamplight (c. 1690, Leiden Collection, New York) or Old Man Reading by Candlelight (c. 1700, Museo del Prado, Madrid). A genre that flourished in Holland in an exceptional way in this century was landscape painting, which, in line with the Mannerist landscape painting of Pieter Brueghel the Elder and Joos de Momper, developed a new sensitivity to atmospheric effects and the reflections of the sun on water. Jan van Goyen was its first representative, followed by artists such as Salomon van Ruysdael, Jacob van Ruisdael, Meindert Hobbema, Aelbert Cuyp, Jan van de Cappelle and Adriaen van de Velde. Salomon van Ruysdael sought atmospheric capture, which he treated by tonalities, studying the light of different times of the day. His nephew Jacob van Ruisdael was endowed with a great sensitivity for natural vision, and his melancholic character led him to elaborate images of great expressiveness, where the play of light and shadow accentuates the drama of the scene. His light is not the saturating and static light of the Renaissance, but a light in movement, perceptible in the effects of light and shadow in the clouds and their reflections on the plains, a light that led John Constable to formulate one of his lessons on art: "remember that light and shadow never stand still". His assistant was Meindert Hobbema, who differed from him in his chromatic contrasts and lively light effects, which reveal a certain nervousness of stroke. Aelbert Cuyp used a much lighter palette than his compatriots, with a warmer and more golden light, probably influenced by Jan Both's "Italianate landscape". He stood out for his atmospheric effects, for the detail of the light reflections on objects and landscape elements, for the use of elongated shadows and for the use of the sun's rays diagonally and in backlighting, in line with the stylistic novelties produced in Italy, especially around the figure of Claude Lorrain. Another genre that flourished in Holland was the still life. One of its best representatives was Willem Kalf, author of still lifes of great precision in detail, which combined flowers, fruits and other foods with various objects, generally luxurious, such as vases, Turkish carpets and bowls of Chinese porcelain, emphasizing their play of light and shadow and the bright reflections on the metallic and crystalline surfaces.

Classicism and full Baroque

Classicism emerged in Bologna, around the so-called Bolognese School, initiated by the brothers Annibale and Agostino Carracci. This trend was a reaction against Mannerism and sought an idealized representation of nature, representing it not as it is, but as it ought to be. It pursued ideal beauty as its sole objective, for which it drew inspiration from classical Greco-Roman and Renaissance art.
This ideal found an ideal subject of representation in the landscape, as well as in historical and mythological themes. In addition to the Carracci brothers, Guido Reni, Domenichino, Francesco Albani, Guercino and Giovanni Lanfranco stood out. In the classicist trend, the use of light is paramount in the composition of the painting, although with slight nuances depending on the artist: from the Incamminati and the Academy of Bologna (Carracci brothers), Italian classicism split into several currents: one moved more towards decorativism, with the use of light tones and shiny surfaces, where the lighting is articulated in large luminous spaces (Guido Reni, Lanfranco, Guercino); another specialized in landscape painting and, starting from the Carracci influence – mainly the frescoes of Palazzo Aldobrandini – developed along two parallel lines: the first focused more on classical-style composition, with a certain scenographic character in the arrangement of landscapes and figures (Poussin, Domenichino); the other is represented by Claude Lorrain, with a more lyrical component and greater concern for the representation of light, not only as a plastic factor but as an agglutinating element of a harmonious conception of the work. Claude Lorrain was one of the baroque painters who best knew how to represent light in his works, to which he gave a primordial importance at the time of conceiving the painting: the light composition served firstly as a plastic factor, being the basis with which he organized the composition, with which he created space and time, with which he articulated the figures, the architectures, the elements of nature; secondly, it was an aesthetic factor, highlighting light as the main sensitive element, as the medium that attracts and envelops the viewer and leads him to a dream world, a world of ideal perfection recreated by the atmosphere of total serenity and placidity that Claude created with his light. Claude's light was direct and natural, coming from the sun, which he placed in the middle of the scene, in sunrises or sunsets that gently illuminated all parts of the painting, sometimes placing in certain areas intense contrasts of light and shadow, or backlighting that impacted on a certain element to emphasize it. The artist from Lorraine emphasized color and light over the material description of the elements, which precedes to a great extent the luminous investigations of Impressionism. Claude's capture of light is unparalleled by any of his contemporaries: in the landscapes of Rembrandt or Ruysdael the light has more dramatic effects, piercing the clouds or flowing in oblique or horizontal rays, but in a directed manner, the source of which can be easily located. On the other hand, Claude's light is serene, diffuse; unlike the artists of his time, he gives it greater relevance if it is necessary to opt for a certain stylistic solution. On numerous occasions he uses the horizon line as a vanishing point, arranging in that place a focus of clarity that attracts the viewer, because that almost blinding luminosity acts as a focalizing element that brings the background closer to the foreground. The light is diffused from the background of the painting and, as it expands, it is enough by itself to create a sensation of depth, blurring the contours and degrading the colors to create the space of the painting. 
Claude prefers the serene and placid light of the sun, direct or indirect, but always through a soft and uniform illumination, avoiding sensational effects such as moonlight, rainbows or storms, which were nevertheless used by other landscape painters of his time. His basic reference in the use of light is Elsheimer, but he differs from him in the choice of light sources and times represented: the German artist preferred exceptional light effects, nocturnal environments, moonlight or twilight; Claude, on the other hand, prefers more natural environments, a limpid light of dawn or the refulgence of a warm sunset. The Flemish Peter Paul Rubens, for his part, represents serenity in the face of tenebrist dramatism. He was a master in finding the precise tonality for the flesh tones of the skin, as well as its different textures and the multiple variants of the effects of brightness and the reflections of light on the flesh. Rubens had an in-depth knowledge of the different techniques and traditions related to light, and so he was able to assimilate both Mannerist iridescent light and tenebrist focal light, internal and external light, homogeneous and dispersed light. In his work, light serves as an organizing element of the composition, in such a way that it binds all the figures and objects into a unitary mass of the same light intensity, with different compositional systems, either with central or diagonal illumination or combining a light in the foreground with another in the background. In his beginnings he was influenced by Caravaggist chiaroscuro, but from 1615 he sought a greater luminosity based on the tradition of Flemish painting, so he accentuated the light tones and marked the contours more. His images stand out for their sinuous movement, with atmospheres built with powerful lights that helped to organize the development of the action, combining the Flemish tradition with the Venetian coloring that he learned in his travels to Italy. Perhaps where he experimented most in the use of light was in his landscapes, most of them painted in his old age, whose use of color and light with agile and vibrant brushstrokes influenced Velázquez and other painters of his time, such as Jordaens and Van Dyck, and artists of later periods such as Jean-Antoine Watteau, Jean-Honoré Fragonard, Eugène Delacroix, and Pierre-Auguste Renoir. Diego Velázquez was undoubtedly the most brilliant artist of his time in Spain, and one of the most internationally renowned. In the evolution of his style we can perceive a profound study of pictorial illumination, of the effects of light both on objects and on the environment, with which he reaches heights of great realism in the representation of his scenes, which however is not exempt from an air of classical idealization, revealing a clear intellectual background that for the artist was a vindication of the painter's craft as a creative and elevated activity. Velázquez was the architect of a space-light in which the atmosphere is a diaphanous matter full of light, which is freely distributed throughout a continuous space, without divisions of planes, in such a way that the light permeates the backgrounds, which acquire vitality and are as highlighted as the foreground. It is a world of instantaneous capture, alien to tangible reality, in which the light generates a dynamic effect that dilutes the contours, which together with the vibratory effect of the changing planes of light produces a sensation of movement.
He usually alternated zones of light and shadow, creating a parallel stratification of space. Sometimes he even atomized the areas of light and shadow into small corpuscles, which was a precedent for Impressionism. In his youth he was influenced by Caravaggio, later evolving towards a more diaphanous light, as shown in his two views of the Villa Medici, in which light filters through the trees. Throughout his career he achieved a great mastery in capturing a type of light of atmospheric origin, of the irradiation of light and chromatic vibration, with a fluid technique that suggested the forms rather than defining them, thus achieving a dematerialized but truthful vision of reality, a reality that transcends matter and is framed in the world of ideas. After the smoothly executed tenebrism and precise drawing of his first period in Seville (Old Woman Frying Eggs, 1618, National Gallery of Scotland, Edinburgh; The Waterseller of Seville, 1620, Apsley House, London), his arrival at the Madrid court marked a stylistic change influenced by Rubens and the Venetian school – whose work he was able to study in the royal collections – with looser brushstrokes and soft volumes, while maintaining a realistic tone derived from his youthful period. Finally, after his trip to Italy between 1629 and 1631, he reached his definitive style, in which he synthesized the multiple influences received, with a fluid technique of pasty brushstrokes and great chromatic richness, as can be seen in The Forge of Vulcan (1631, Museo del Prado, Madrid). The Surrender of Breda (1635, Museo del Prado, Madrid) was a first milestone in his mastery of atmospheric light, where color and luminosity achieve an accentuated protagonism. In works such as Pablo de Valladolid (1633, Museo del Prado, Madrid), he managed to define the space without any geometric reference, only with lights and shadows. The Sevillian artist was a master at recreating the atmosphere of enclosed spaces, as shown in Las Meninas (1656, Museo del Prado, Madrid), where he placed several spotlights: the light that enters through the window and illuminates the figures of the Infanta and her ladies-in-waiting, the light from the rear window that shines around the lamp hanger and the light that enters through the door in the background. In this work he constructed a plausible space by defining or diluting the forms according to the use of light and the nuance of color, in a display of technical virtuosity that has led to the canvas being considered one of the masterpieces in the history of painting. In a similar way, he succeeded in structuring space and forms by means of light planes in Las hilanderas (The Spinners, 1657, Museo del Prado, Madrid). Another outstanding Spanish Baroque painter was Bartolomé Esteban Murillo, one of whose favorite themes was the Immaculate Conception, of which he produced several versions, generally with the figure of the Virgin within an atmosphere of golden light symbolizing divinity. He generally used translucent colors applied in thin layers, with an almost watercolor-like appearance, a procedure that denotes the influence of Venetian painting. After a youthful period of tenebrist influence, in his mature work he rejected chiaroscuro dramatism and developed a serene luminosity that was shown in all its splendor in his characteristic breaks of glory, of rich chromaticism and soft luminosity.
The last period of this style was the so-called "full Baroque" (second half of the 17th and early 18th centuries), a decorative style in which the illusionist, theatrical and scenographic character of Baroque painting was intensified, with a predominance of mural painting – especially on ceilings – in which Pietro da Cortona, Andrea Pozzo, Giovanni Battista Gaulli (il Baciccio), Luca Giordano and Charles Le Brun stood out. Works such as Gaulli's ceiling of the church of the Gesù, or that of the Palazzo Barberini by Cortona, are, according to John Gage, "where the ability to combine extreme light and darkness in a painting was pushed to the limit"; he adds that "the Baroque decorator not only introduced into painting the contrasts between extreme darkness and extreme light, but also a careful gradation between the two." Also noteworthy is Andrea Pozzo's Glory of Saint Ignatius of Loyola (1691-1694), on the ceiling of the church of Saint Ignatius in Rome: a scene full of heavenly light in which Christ sends a ray of light into the heart of the saint, who in turn deflects it into four beams of light directed towards the four continents. In Spain, Francisco de Herrera el Mozo, Juan Carreño de Miranda, Claudio Coello and Francisco Ricci were exponents of this style.

18th Century

The 18th century was nicknamed the "Age of Enlightenment", as it was the period in which the Enlightenment emerged, a philosophical movement that defended reason and science against religious dogmatism. Art oscillated between the late Baroque exuberance of Rococo and neoclassicist sobriety, between artifice and naturalism. A certain autonomy of the artistic act began to take shape: art moved away from religion and the representation of power to become a faithful reflection of the artist's will, and focused more on the sensitive qualities of the work than on its meaning. In this century most national art academies were created, institutions in charge of preserving art as a cultural phenomenon, of regulating its study and conservation, and of promoting it through exhibitions and competitions; originally, they also served as training centers for artists, although over time they lost this function, which was transferred to private institutions. After the Académie royale de peinture et de sculpture, founded in Paris in 1648, this century saw the creation of the Royal Academy of Fine Arts of San Fernando in Madrid (1744), the Russian Academy of Arts in Saint Petersburg (1757), the Royal Academy of Arts in London (1768), etc. The art academies favored a classical and canonical style – academicism – often criticized for its conservatism, especially by the avant-garde movements that emerged between the 19th and 20th centuries. During this period, when science was attracting growing interest among scholars and the general public, numerous studies of optics were carried out. In particular, the study of shadows was deepened and sciography emerged as the science that studies the perspective and two-dimensional representation of the forms produced by shadows. Claude-Nicolas Le Cat wrote in 1767: "the art of drawing proves that the mere gradation of the shadow, its distributions and its nuances with simple light, suffice to form the images of all objects".
The entry on shadow in L'Encyclopédie, the great project of Diderot and d'Alembert, differentiates between several types of shadow: "inherent", the shadow on the object itself; "cast", the shadow thrown onto another surface; "projected", the shadow resulting from the interposition of a solid between a surface and the light source; and two further kinds of tilted shadow, according to whether the angle lies on the vertical or on the horizontal axis. It also classified light sources as "point", "ambient" and "extensive": the first produces sharp-edged shadows, ambient light produces no shadow, and extensive sources produce soft-edged shadows divided into two areas – the "umbra", the fully darkened zone from which the light source is entirely hidden, and the "penumbra", the partially darkened edge that receives light from only part of the source. Several treatises on painting were also written in this century that studied in depth the representation of light and shadow, such as those by Claude-Henri Watelet (L'Art de peindre, poème, avec des réflexions sur les différentes parties de la peinture, 1760) and Francesco Algarotti (Saggio sopra la pittura, 1764). Pierre-Henri de Valenciennes (Élémens de perspective pratique, a l'usage des artistes, suivis de réflexions et conseils à un élève sur la peinture, et particulièrement sur le genre du paysage, 1799) made several studies on the rendering of light at various times of the day, and recorded the various factors affecting the different types of light in the atmosphere, from the rotation of the Earth to the degree of humidity in the environment and the particular reflective characteristics of a given place. He advised his students to paint the same landscape at different times of the day and especially recommended four distinctive moments: morning, characterized by freshness; noon, with its blinding sun; twilight and its fiery horizon; and night with the placid effects of moonlight. Acisclo Antonio Palomino, in El Museo Pictórico y Escala Óptica (1715-1724), stated that light is "the soul and life of everything visible" and that in painting it "gives such an extension to sight that it sees not only the physical and real but also the apparent and feigned, persuading of bodies, distances and volumes through the elegant arrangement of light and dark, shadows and lights".

Rococo meant the survival of the main artistic manifestations of the Baroque, with a more emphasized sense of decoration and ornamental taste, which were taken to a paroxysm of richness, sophistication and elegance. Rococo painting had a special reference in France, in the court scenes of Jean-Antoine Watteau, François Boucher and Jean-Honoré Fragonard. Rococo painters preferred scenes illuminated in broad daylight or colorful sunrises and sunsets. Watteau was the painter of the fête galante, of court scenes set in bucolic landscapes, a type of shady landscape of Flemish heritage. Boucher, an admirer of Correggio, specialized in the female nude, with a soft and delicate style in which the light emphasizes the placidity of the scenes, generally mythological. Fragonard had a sentimental style of free technique, with which he elaborated gallant scenes of a certain frivolity. In the still life genre Jean-Baptiste-Siméon Chardin stood out, a virtuoso in the creation of atmospheres and light effects on objects and surfaces, generally with a soft and warm light achieved through glazes and fading, with which he created intimate atmospheres of deep shadows and soft gradations.
In this century, one of the movements most concerned with the effects of light was Venetian vedutismo, a genre of urban views that meticulously depicted the canals, monuments and places most typical of Venice, alone or with the presence of the human figure, generally small in scale and in large groups of people. The veduta is usually composed of wide perspectives, with a distribution of elements close to stage design and with a careful use of light, gathering up the whole tradition of atmospheric representation, from Leonardo's sfumato to the chromatic ranges of Claude Lorrain's sunrises and sunsets. Canaletto's work stands out: his sublime views of the Adriatic city captured with great precision the atmosphere of a city suspended over the water. The great precision and detail of his works was due in large part to the use of the camera obscura, a forerunner of photography. Another outstanding representative was Francesco Guardi, interested in the shimmering effects of light on the water and the Venetian atmosphere, with a light touch technique that was a precursor of impressionism. The landscape genre continued the naturalistic experimentation begun in the Baroque in the Netherlands. Another reference was Claude Lorrain, whose influence was especially felt in England. The 18th century landscape incorporated the aesthetic concepts of the picturesque and the sublime, which gave the genre greater autonomy. One of the first exponents was the French painter Michel-Ange Houasse, who settled in Spain and initiated a new way of understanding the role of light in the landscape: in addition to illuminating it, light "constructs" the landscape, configures it and gives it consistency, and determines the vision of the work, since the variation of the factors involved implies a specific and particular point of view. Claude Joseph Vernet specialized in seascapes, often painted in nocturnal settings by moonlight. He was influenced by Claude Lorrain and Salvator Rosa, from whom he inherited the concept of an idealized and sentimental landscape. The same type of landscape was developed by Hubert Robert, with a greater interest in the picturesque, as evidenced by his fondness for ruins, which serve as the setting for many of his works. Landscape painting was also prominent in England, where the influence of Claude Lorrain was felt to such an extent that it largely determined the planimetry of the English garden. Here there was a great love for gardens, so landscape painting was much sought after, unlike on the continent, where it was considered a minor genre. In this period many painters and watercolorists emerged who dedicated themselves to the transcription of the English landscape, in which they captured a new sensibility towards the luminous and atmospheric effects of nature. In this type of work the main artistic value was the capture of the atmosphere, and clients valued above all a vision comparable to the contemplation of a real landscape. Prominent artists were: Richard Wilson, Alexander Cozens, John Robert Cozens, Robert Salmon, Samuel Scott, Francis Towne and Thomas Gainsborough. One of the 18th century painters most concerned with light was Joseph Wright of Derby, who was interested in the effects of artificial light, which he masterfully captured. He spent some formative years in Italy, where he was interested in the effects of fireworks in the sky and painted the eruptions of Vesuvius.
One of his masterpieces is An Experiment on a Bird in the Air Pump (1768, The National Gallery, London), where he places a powerful light source in the center that illuminates all the characters, perhaps a metaphor for the Enlightenment light that illuminates all human beings equally. The light comes from a candle hidden behind the glass jar used to perform the experiment, whose shadow is placed next to a skull, both symbols of the transience of life, often used in vanitas. Wright made several paintings with artificial lighting, which he called candlelight pictures, generally with violent contrasts of light and shadow. In addition – and especially in his paintings of scientific subjects, such as the one mentioned above or A Philosopher Lecturing on the Orrery (1766, Derby Museum and Art Gallery, Derby) – light symbolizes reason and knowledge, in keeping with the Enlightenment, the "Age of Enlightenment".

In the transition between the 18th and 19th centuries, one of the most outstanding artists was Francisco de Goya, who evolved from a more or less rococo style to a certain pre-Romanticism, always with a personal and expressive output of strongly intimate tone. Numerous scholars of his work have emphasized Goya's metaphorical use of light as the conqueror of darkness. For Goya, light represented reason, knowledge and freedom, as opposed to the ignorance, repression and superstition associated with darkness. He also said that in painting he saw "only illuminated bodies and bodies that are not, planes that advance and planes that recede, reliefs and depths". The artist painted a self-portrait in his studio, set against the light of a large window that fills the room, and, as if that were not enough, he wears lighted candles on his hat (Autorretrato en el taller, 1793–1795, Real Academia de Bellas Artes de San Fernando, Madrid). At the same time, he felt a special predilection for nocturnal atmospheres, and in many of his works he took up the tradition that began with Caravaggist tenebrism and reinterpreted it in a personal way. According to Jeannine Baticle, "Goya is the faithful heir of the great Spanish pictorial tradition. In him, shadow and light create powerful volumes built in the impasto, clarified with brief luminous strokes in which the subtlety of the colors produces infinite variations". Among his early production, in which he was mainly in charge of the elaboration of cartoons for the Royal Tapestry Factory of Santa Barbara, El quitasol (1777, Museo del Prado, Madrid) stands out for its luminosity, which follows the popular and traditional tastes in fashion at the court at that time: a boy shades a young woman with a parasol, with an intense chromatic contrast between the bluish and golden tones of the reflected light. Other works outstanding for their atmospheric light effects are La nevada (1786, Museo del Prado, Madrid) and La pradera de San Isidro (1788, Museo del Prado, Madrid). As a painter of the king's chamber, his collective portrait La familia de Carlos IV (1800, Museo del Prado, Madrid) stands out, in which he seems to give a protocol order to the illumination, from the most powerful light centered on the kings in the central part, passing through the dimmer light on the rest of the family, to the penumbra in which the artist himself is portrayed in the left corner.
Of his mature work, Los fusilamientos del 3 de mayo de 1808 en la Moncloa (1814, Museo del Prado, Madrid) stands out, where he places the light source in a lantern set in the lower part of the painting, although it is its reflection on the white shirt of one of the men about to be executed that becomes the most powerful focus of light, elevating his figure into a symbol of the innocent victim in the face of barbarism. The choice of night is a clearly symbolic factor, since it is related to death, a fact accentuated by the Christological appearance of the figure with his arms raised. Albert Boime discussed this work at length in his Historia social del arte. Among his last works is The Milkmaid of Bordeaux (1828, Museo del Prado, Madrid), where light is captured only with color, with a fluffy brushstroke that emphasizes the tonal values, a technique that points to impressionism.

Also between the two centuries, neoclassicism developed in France after the French Revolution, a style that favored the resurgence of classical forms, purer and more austere, as opposed to the ornamental excesses of the Baroque and Rococo. The discovery of the ruins of Pompeii and Herculaneum helped to make Greco-Latin culture fashionable, together with an aesthetic ideology that advocated the perfection of classical forms as an ideal of beauty; this generated a myth of classical beauty that still conditions the perception of art today. Neoclassical painting maintained an austere and balanced style, influenced by Greco-Roman sculpture or by figures such as Raphael and Poussin. Jacques-Louis David stood out, as well as François Gérard, Antoine-Jean Gros, Pierre-Paul Prud'hon, Anne-Louis Girodet-Trioson, Jean Auguste Dominique Ingres, Anton Raphael Mengs and José de Madrazo. Neoclassicism replaced the dramatic illumination of the Baroque with the restraint and moderation of classicism, with cold tones and a preponderance of drawing over color, and gave special importance to line and contour. Neoclassical images put the idea before the feeling, the truthful description of reality before the imaginative whims of the Baroque artist. Neoclassical light is clear, cold and diffuse, bathing the scenes uniformly, without violent contrasts; even so, chiaroscuro was sometimes used, intensely illuminating figures or certain objects in contrast with the darkness of the background. The light delimits the contours and the space, and generally gives an appearance of solemnity to the image, in keeping with the subjects treated, usually history, mythological and portrait paintings. The initiator of this style was Jacques-Louis David, a sober artist who completely subordinated color to drawing. He meticulously studied the light composition of his works, as can be seen in The Oath at the Jeu de Paume (1791, Musée National du Château de Versailles) and The Intervention of the Sabine Women (1794-1799, Musée du Louvre, Paris). In The Death of Marat (1793, Royal Museums of Fine Arts of Belgium, Brussels) he developed a play of light that shows the influence of Caravaggio. Anne-Louis Girodet-Trioson followed David's style, although his emotivism brought him closer to pre-Romanticism. He was interested in chromaticism and in the concentration of light and shadow, as glimpsed in The Sleep of Endymion (1791, Musée du Louvre, Paris) and The Burial of Atala (1808, Musée du Louvre, Paris).
Jean Auguste Dominique Ingres was a prolific artist, always faithful to classicism, to the point of being considered the champion of academic painting against 19th century romanticism. He was especially devoted to portraits and nudes, which stand out for their purity of line, their marked contours and a chromatism close to enamel. Pierre-Paul Prud'hon assumed neoclassicism with a certain rococo influence, with a predilection for feminine voluptuousness inherited from Boucher and Watteau, while his work shows a strong influence of Correggio. In his mythological paintings populated by nymphs, he showed a preference for twilight and lunar light, a dim and faint light that delicately bathes the female forms, whose white skin seems to glow. Landscape painting was considered a minor genre by the neoclassicists. Even so, it had several outstanding exponents, especially in Germany, where Joseph Anton Koch, Ferdinand Kobell and Wilhelm von Kobell are worth mentioning. The first focused on the Alpine mountains, where he succeeded in capturing the cloudy atmosphere of the high mountains and the effects of sparkling light on vegetation and water surfaces. He usually incorporated the human presence, sometimes with a thematic pretext of a historical or literary kind – such as Shakespeare's plays or the Ossian cycle. The light in his paintings is generally clear and cold, natural, without too much stridency. If Koch represented a type of idealistic landscape, heir to Poussin or Lorrain, Ferdinand Kobell represented the realistic landscape, indebted to the Dutch Baroque landscape. His landscapes of valleys and plains with mountainous backgrounds are bathed in a translucent light, with intense contrasts between the various planes of the image. His son Wilhelm followed his style, with a greater concern for light, evident in his clear settings of cold light and elongated shadows, which give his figures a hard consistency and a metallic appearance.

Contemporary Art

19th Century

In the 19th century an evolutionary dynamic of styles began, succeeding one another chronologically with increasing speed, and modern art emerged in opposition to academic art, with the artist placed at the forefront of the cultural evolution of humanity. The study of light was enriched by the appearance of photography and by new technological advances in artificial light: gaslight at the beginning of the century, kerosene in the middle and electricity at the end. These two phenomena brought about a new awareness of light, since this element configures visual appearance, shifting the concept of reality from the tangible to the perceptible.

Romanticism

The first style of the century was Romanticism, a movement of profound renewal in all artistic genres, which paid special attention to the realm of spirituality, fantasy, sentiment and love of nature, along with a darker element of irrationality, attraction to the occult, madness and dreams. Popular culture, the exotic and the return to underrated artistic forms of the past – especially medieval ones – were especially valued, and landscape gained prominence, becoming a protagonist in its own right. The Romantics had the idea of an art that arose spontaneously from the individual, emphasizing the figure of the "genius": art is the expression of the artist's emotions.
The Romantics used a more expressive technique than neoclassical restraint allowed, modeling the forms by means of impasto and glazes, in such a way that the expressiveness of the artist is released. In a certain pre-Romanticism we can place William Blake, an original writer and artist, difficult to classify, who devoted himself especially to illustration, in the manner of the ancient illuminators of codices. Most of Blake's images are set in a nocturnal world, in which light emphasizes certain parts of the image, a light of dawn or twilight, almost "liquid", unreal. Between neoclassicism and romanticism also stood Johann Heinrich Füssli, author of dreamlike images in a style influenced by Italian mannerism, in which he used strong contrasts of light and shadow, with theatrical lighting reminiscent of stage footlights. One of the pioneers of Romanticism was the prematurely deceased Frenchman Théodore Géricault, whose masterpiece, The Raft of the Medusa (1819, Musée du Louvre, Paris), presents a ray of light emerging from the stormy clouds in the background as a symbol of hope. The most prominent member of the movement in France was Eugène Delacroix, a painter influenced by Rubens and the Venetian school, who conceived of painting as a medium in which patches of light and color are related. He was also influenced by John Constable, whose painting The Hay Wain opened his eyes to a new sensitivity to light. In 1832 he traveled to Morocco, where he developed a new style that could be considered proto-impressionist, characterized by the use of white to highlight light effects, with a rapid execution technique. In the field of landscape painting, John Constable and Joseph Mallord William Turner stood out, heirs of the rich tradition of English landscape painting of the 18th century. Constable was a pioneer in capturing atmospheric phenomena. Kenneth Clark, in Landscape into Art, credited him with the invention of the "chiaroscuro of nature", which would be expressed in two ways: on the one hand, the contrast of light and shade that for Constable was essential in any landscape painting and, on the other, the sparkling effects of dew and breeze that the British painter was able to capture so masterfully on his canvases, with a technique of interrupted strokes and touches of pure white made with a palette knife. Constable once said that "the form of an object is indifferent; light, shadow and perspective will always make it beautiful". Joseph Mallord William Turner was a painter with a great intuition for capturing the effects of light in nature, with settings that combine luminosity with atmospheric effects of great drama, as seen in Hannibal Crossing the Alps (1812, Tate Gallery, London). Turner had a predilection for violent atmospheric phenomena, such as storms, tidal waves, fog, rain, snow, or fire and spectacles of destruction, in landscapes in which he made numerous experiments on chromaticism and luminosity, which gave his works an aspect of great visual realism. His technique was based on a colored light that dissolved the forms in a space-color-light relationship that gives his work an appearance of great modernity. According to Kenneth Clark, Turner "was the one who raised the key of color so that his paintings not only represented light, but also symbolized the nature of light". His early works still had a certain classical component, in which he imitated the style of artists such as Claude Lorrain, Richard Wilson, Adriaen van de Velde or Aelbert Cuyp.
They are works in which he still represents light by means of contrast, executed in oil; however, his watercolors already pointed to what would be his mature style, characterized by the rendering of color and light in movement, with a clear tonality achieved by first laying down a pearly film of paint. In 1819 he visited Italy, whose light inspired him and induced him to elaborate images in which the forms dissolved in a misty luminosity, with pearly moonscapes and shades of yellow or scarlet. He then devoted himself to his most characteristic images, mainly coastal scenes in which he made a profound study of atmospheric phenomena. In Interior at Petworth (1830, British Museum, London) the basis of his design is already light and color; the rest is subordinated to these values. Of his later works Clark states that "Turner's imagination was capable of distilling, from light and color, poetry as delicate as Shelley's." Among his works are: San Giorgio Maggiore: At Dawn (1819, Tate Gallery), Regulus (1828, Tate Gallery), The Burning of the Houses of Lords and Commons (1835, Philadelphia Museum of Art), The Fighting Temeraire (1839, National Gallery), The Slave Ship (Slavers Throwing Overboard the Dead and Dying) (1840, Museum of Fine Arts, Boston), Twilight over a Lake (1840, Tate Gallery), Rain, Steam and Speed (1844, National Gallery), etc. Mention should also be made of Richard Parkes Bonington, a prematurely deceased artist, primarily a watercolorist and lithographer, who spent most of his career in Paris. He had a light, clear and spontaneous style. His landscapes denote the same atmospheric sensibility as Constable and Turner, with a great delicacy in the treatment of light and color, to the point that he is considered a precursor of impressionism.

In Germany the figure of Caspar David Friedrich stands out, a painter with a pantheistic and poetic vision of nature, an uncorrupted and idealized nature in which the human figure plays only the role of spectator before its grandeur and infinity. From his beginnings, Friedrich developed a style marked by sure contours and subtle play of light and shadow, in watercolor, oil or sepia ink. One of his first outstanding works is The Cross in the Mountains (1808, Gemäldegalerie Neue Meister, Dresden), where a cross with Christ crucified stands on a pyramid of rocks against the light, in front of a sky furrowed with clouds and crossed by five beams of light that emerge from an invisible sun intuited behind the mountain, without it being clear whether it is sunrise or sunset; one of the beams generates reflections on the crucifix, making it clear that it is a metal sculpture. During his early years he focused on landscapes and seascapes, with warm sunrise and sunset lights, although he also experimented with the effects of winter, stormy and foggy light. A more mature work is Memorial Image for Johann Emanuel Bremer (1817, Alte Nationalgalerie, Berlin), a night scene with a strong symbolic content alluding to death: in the foreground appears a garden in twilight, with a fence through which the rays of the moon filter; the background, with a faint light of dawn, represents the afterlife.
In Woman at Sunrise (1818-1820, Folkwang Museum, Essen) – also called Woman at Sunset, since the time of day is not known with certainty – he showed one of his characteristic compositions, that of a human figure facing the immensity of nature, a faithful reflection of the romantic feeling of the sublime, with a sky of intensely reddish yellow; it is usually interpreted as an allegory of life as a permanent Holy Communion, a kind of religious communion devised by August Wilhelm von Schlegel. Between 1820 and 1822 he painted several landscapes in which he captured the variation of light at different times of the day: Morning, Noon, Afternoon and Sunset, all of them in the Niedersächsisches Landesmuseum in Hannover. For Friedrich, dawn and dusk symbolized birth and death, the cycle of life. In Sea with Sunrise (1826, Hamburger Kunsthalle, Hamburg) he reduced the composition to a minimum, playing with light and color to create an image of great intensity, inspired by the engravings of the 16th and 17th centuries that recreated the appearance of light on the first day of Creation. One of his last works was The Ages of Life (1835, Museum der bildenden Künste, Leipzig), where the five figures are related to the five boats at different distances from the horizon, symbolizing the ages of life. Other outstanding works of his are: Abbey in the Oak Grove (1809, Alte Nationalgalerie, Berlin), Rainbow in a Mountain Landscape (1809-1810, Folkwang Museum, Essen), View of a Harbor (1815-1816, Charlottenburg Palace, Berlin), Wanderer above the Sea of Fog (1818, Hamburger Kunsthalle, Hamburg), Moonrise on the Seaside (1821, Hermitage Museum, Saint Petersburg), Sunset on the Baltic Sea (1831, Gemäldegalerie Neue Meister, Dresden), The Great Enclosure (1832, Gemäldegalerie Neue Meister, Dresden), etc.

The Norwegian Johan Christian Dahl moved in the wake of Friedrich, although with a greater interest in light and atmospheric effects, which he captured in a naturalistic way, thus moving away from the romantic landscape. In his works he shows a special interest in the sky and clouds, as well as in misty and moonlit landscapes. In many of his works the sky occupies almost the entire canvas, leaving only a narrow strip of land occupied by a solitary tree. Georg Friedrich Kersting transposed Friedrich's pantheistic mysticism to interior scenes, illuminated by a soft light of lamps or candles that gently illuminates the domestic settings he used to represent, giving these scenes an appearance that transcends reality to become solemn images with a certain air of mystery. Philipp Otto Runge developed his own theory of color, according to which he differentiated between opaque and transparent colors depending on whether they tended towards light or darkness. In his work this distinction served to set off the figures in the foreground against the background of the scene, which was usually translucent, generating a psychological effect of transition between planes. This served to intensify the allegorical sense of his works, since his main objective was to show the mystical character of nature. Runge was a virtuoso in capturing the subtle effects of light, a mysterious light that has its roots in Altdorfer and Grünewald, as in his portraits illuminated from below with magical reflections that bathe the figure as if immersed in a halo.
The Nazarene movement also emerged in Germany, a group of painters who between 1810 and 1830 adopted a deliberately old-fashioned style, inspired by Renaissance classicism – mainly Fra Angelico, Perugino and Raphael – and with an accentuated religious sense. The Nazarene style was eclectic, with a preponderance of drawing over color and a diaphanous luminosity, with limitation or even rejection of chiaroscuro. Its main representatives were: Johann Friedrich Overbeck, Peter von Cornelius, Julius Schnorr von Carolsfeld and Franz Pforr. Also in Germany and the Austrian Empire there was the Biedermeier style, a more naturalistic tendency halfway between romanticism and realism. One of its main representatives was Ferdinand Georg Waldmüller, an advocate of the study of nature as the only goal of painting. His paintings brim with a resplendent clarity, a meticulously elaborated light of almost palpable quality, used as an element that builds the reality of the painting, combined with well-defined shadows. Other artists of interest in this trend are Johann Erdmann Hummel, Carl Blechen, Carl Spitzweg and Moritz von Schwind. Hummel used light as a stylizing element, with a special interest in unusual light phenomena, from artificial light to glints and reflections. Blechen evolved from a typical romanticism with a heroic and fantastic tone to a naturalism characterized by light, following a year's stay in Italy. Blechen's light is summery, a bright light that accentuates the volume of objects by giving them a tactile substance, combined with a skillful use of color. Spitzweg incorporated camera obscura effects into his paintings, in which light, whether sunlight or moonlight, appears in the form of beams that create effects that are sometimes unreal but of great visual impact. Schwind was the creator of a diaphanous and lyrical light, captured in resplendent luminous spaces with subtle tonal gradations in the reflections. Lastly, we should mention the Dane Christen Købke, author of landscapes of a delicate light reminiscent of the pointillé of Vermeer or the luminosity of Gerrit Berckheyde. In Italy in the 1830s there emerged the so-called Posillipo School, a group of anti-academic Neapolitan landscape painters, among whom Giacinto Gigante, Filippo Palizzi and Domenico Morelli stood out. These artists showed a new concern for light in the landscape, with a more truthful aspect, far from the classical canons, in which shimmering effects gain prominence. Inspired by Vedutism and picturesque painting, as well as by the work of the artist they considered their direct master, Anton Sminck van Pitloo, they used to paint from life, in compositions in which the chromatism stands out without losing the solidity of the drawing.

Realism

Romanticism was succeeded by realism, a trend that emphasized reality, the description of the surrounding world, especially of workers and peasants in the new framework of the industrial era, with a certain component of social denunciation, linked to political movements such as utopian socialism. These artists moved away from the usual historical, religious or mythological themes to deal with more mundane themes of modern life. One of the realist painters most concerned with light was Jean-François Millet, influenced by Baroque and Romantic landscape painting, especially Caspar David Friedrich.
He specialized in peasant scenes, often in landscapes set at dawn or dusk, as in On the Way to Work (1851, private collection), Shepherdess Watching Her Flock (1863, Musée d'Orsay, Paris) or A Norman Milkmaid at Gréville (1871, Los Angeles County Museum of Art). For the composition of his works he often used wax or clay figurines that he moved around to study the effects of light and volume. His technique consisted of dense and vigorous brushwork, with strong contrasts of light and shadow. His masterpiece is The Angelus (1857, Musée d'Orsay, Paris): the evening setting of this work allows its author to emphasize the dramatic aspect of the scene, translated pictorially into non-contrasting tonalities, with the darkened figures standing out against the brightness of the sky, which increases their volume and accentuates their outline, resulting in an emotional vision that underlines the social message the artist wants to convey. One of his last works was Bird Hunters (1874, Philadelphia Museum of Art), a nocturnal scene in which peasants dazzle birds with a torch in order to hunt them; the luminosity of the torch stands out, achieved with a dense application of impasto. The champion of realism was Gustave Courbet, who in his training was nourished by Flemish, Dutch and Venetian painting of the 16th and 17th centuries, especially Rembrandt. His early works are still of romantic inspiration, in which he uses a dramatic tone of light borrowed from the Flemish-Dutch tradition but reinterpreted with a more modern sensibility. His mature work, now fully realistic, shows the influence of the Le Nain brothers, and is characterized by large, meticulously worked canvases, with broad shiny surfaces and a dense application of pigment, often done with a palette knife. At the end of his career he devoted himself more to landscapes and nudes, which stand out for their luminous sensibility. Another reference was Honoré Daumier, painter, lithographer and caricaturist with a strong satirical tone and a loose, free stroke, with an effective use of chiaroscuro. In his paintings he was inspired by Goya's contrasts of light, with little colorism and greater emphasis on light (The Fugitives, 1850; Barabbas, 1850; The Butcher, 1857; The Third-Class Carriage, 1862). Linked to realism was the French landscape school of Barbizon (Camille Corot, Théodore Rousseau, Charles-François Daubigny, Narcisse-Virgile Díaz de la Peña), marked by a pantheistic feeling for nature, with concern for the effects of light in the landscape, such as the light that filters through the branches of trees. The most outstanding was Corot, who discovered light in Italy, where he dedicated himself to painting Roman landscapes outdoors, captured at different times of the day, in scenes of clean atmosphere in which he applied to the surfaces of the volumes the precise doses of light needed to achieve a panoramic vision in which the volumes stand out against the atmosphere. Corot had a predilection for a type of tremulous light reflected on water or filtered through the branches of trees, with which he found a formula that satisfied him while achieving great popularity with the public. Eugène Boudin, one of the first landscape painters to paint outdoors, especially seascapes, also stood out as an independent artist.
He achieved great mastery in the elaboration of skies, shimmering and slightly misty skies of dim and transparent light, a light that is also reflected in the water with instantaneous effects that he knew how to capture with spontaneity and precision, with a fast technique that already pointed to impressionism – in fact, he was Monet's teacher. Naturalistic landscape painting had another outstanding representative in Germany, Adolph von Menzel, who was influenced by Constable and developed a style in which light is decisive for the visual aspect of his works, with a technique that was a precursor of impressionism. Also noteworthy are his interior scenes with artificial light, in which he recreates a multitude of anecdotal details and luminous effects of all kinds, as in his Dinner after the Ball (1878, Alte Nationalgalerie, Berlin). Alongside him stands Hans Thoma, influenced by Courbet, who in his works combined the social vindication of realism with a still somewhat romantic feeling for the landscape. Thoma was an exponent of a "lyrical realism", with landscapes and paintings of peasant themes, usually set in his native Black Forest, characterized by the use of a silver-toned light. In the Netherlands there was the figure of Johan Barthold Jongkind, considered a pre-impressionist, whom Monet also regarded as his master. He was a great interpreter of atmospheric phenomena and of the play of light on water and snow, as well as of winter and night lights – his moonlit landscapes were highly valued. In Spain, Carlos de Haes, Agustín Riancho and Joaquín Vayreda deserve to be mentioned. Haes, of Belgian origin, traveled across the whole of Spain to paint its landscapes, which he rendered with almost topographical detail. Riancho had a predilection for mountain scenery, with a free and spontaneous coloring tending toward dark shades. Vayreda was the founder of the so-called Olot School. Influenced by the Barbizon School, he applied this style to the Girona landscape, with works of diaphanous and serene composition and a certain lyrical component of bucolic evocation. Also in Spain it is worth mentioning the work of Mariano Fortuny, who found his personal style in Morocco as a chronicler of the African War (1859-1860), where he discovered the colorfulness and exoticism that would characterize his work. There he began to paint with quick sketches of luminous touches, with which he captured the action in a spontaneous and vigorous way, and which would become the basis of his style: a vibrantly executed colorism with flashing light effects, as seen in one of his masterpieces, La vicaría (1868-1870, Museo Nacional de Arte de Cataluña, Barcelona). Another landscape school was the Italian school of the Macchiaioli (Silvestro Lega, Giovanni Fattori, Telemaco Signorini), of anti-academic style, characterized by the use of patches of color (macchia in Italian, hence the name of the group) and unfinished, sketched forms, a movement that preceded Impressionism. These artists painted from life and had as their main objective the reduction of painting to contrasts of light and brilliance. According to Diego Martelli, one of the theorists of the group, "we affirmed that form did not exist and that, just as in light everything results from color and chiaroscuro, so it is a matter of obtaining tones, the effects of the true". The Macchiaioli revalued light contrasts and knew how to transcribe onto their canvases the power and clarity of the Mediterranean light.
They captured like no one else the effects of the sun on objects and landscapes, as in the painting The Patrol by Giovanni Fattori, in which the artist uses a white wall as a luminous screen against which the figures are cut out. In Great Britain, the school of the Pre-Raphaelites emerged, who were inspired – as their name indicates – by Italian painters before Raphael, as well as by the recently invented photography, with exponents such as Dante Gabriel Rossetti, Edward Burne-Jones, John Everett Millais, William Holman Hunt and Ford Madox Brown. The Pre-Raphaelites sought a realistic vision of the world, based on images of great detail, vivid colors and brilliant workmanship; as opposed to the side lighting advocated by academicist painting, they preferred general lighting, which turned paintings into flat images, without great contrasts of light and shadow. To achieve maximum realism, they carried out numerous investigations, as in the painting The Rescue (1855, National Gallery of Victoria, Melbourne), by John Everett Millais, in which a fireman saves two girls from a fire; to find the right lighting, the artist burned wood in his studio. The almost photographic detail of these works led John Ruskin to say of William Holman Hunt's Strayed Sheep (1852, Tate Britain, London) that "for the first time in the history of art the absolutely faithful balance between color and shade is achieved, by which the actual brightness of the sun could be transported into a key by which possible harmonies with material pigments should produce on the mind the same impressions as are made by the light itself." Hunt was also the author of The Light of the World (1853, Keble College, Oxford University), in which light has a symbolic meaning, related to the biblical passage that identifies Christ with the phrase "I am the light of the world, he who follows me shall not walk in darkness, for he shall have the light of life" (John 8:12). This painter again portrayed the symbolic light of Jesus Christ in The Awakening Conscience (1853, Tate Britain), through the light of the garden streaming through the window.

Romanticism and realism were the first artistic movements that rejected the official art of the time, the art taught in the academies – academicism – an art that was institutionalized and anchored in the past both in the choice of subjects and in the techniques and resources made available to the artist. In France, in the second half of the 19th century, this art was called art pompier ("fireman's art", a pejorative name derived from the fact that many artists represented classical heroes with helmets that resembled firemen's helmets). Although the academies were at first in tune with the art of their time – so that one cannot speak of a distinct style – in the 19th century, when the evolution of styles began to move away from the classical canons, academic art became constrained within a classicist style based on strict rules. Academicism was stylistically based on Greco-Roman classicism, but also on earlier classicist masters such as Raphael, Poussin or Guido Reni. Technically, it was based on careful drawing, formal balance, perfect line, plastic purity and careful detailing, together with realistic and harmonious coloring. Many of its representatives had a special predilection for the nude as an artistic theme, as well as a special attraction to orientalism.
Its main representatives were: William-Adolphe Bouguereau, Alexandre Cabanel, Eugène-Emmanuel Amaury-Duval and Jean-Léon Gérôme.

Impressionism

Light played a fundamental role in impressionism, a style based on the representation of an image according to the "impression" that light produces on the eye. In contrast to academic art and its forms of representation based on linear perspective and geometry, the Impressionists sought to capture reality on the canvas as they perceived it visually, so they gave all the prominence to light and color. To this end, they used to paint outdoors (en plein air), capturing the various effects of light on the surrounding environment at different times of the day. They studied in depth the laws of optics and the physics of light and color. Their technique was based on loose brushstrokes and a combination of colors applied according to the viewer's vision, with a preponderance of contrast between elementary colors (yellow, red and blue) and their complements (orange, green and violet). In addition, they used to apply the pigment directly on the canvas, without mixing, thus achieving greater luminosity and brilliance. Impressionism perfected the capture of light by means of fragmented touches of color, a procedure that had already been used to a greater or lesser extent by artists such as Giorgione, Titian, Guardi and Velázquez (it is well known that the Impressionists admired the genius of Velázquez, the author of Las Meninas, whom they considered "the painter of painters"). For the Impressionists, light was the protagonist of the painting; they began to paint from life, capturing at every moment the variations of light on landscapes and objects, the fleeting "impression" of light at different times of the day, and for this reason they often produced series of paintings of the same place at different times. To this end they dispensed with drawing and defined form and volume directly with color, in loose brushstrokes of pure tones, juxtaposed with one another. They also abandoned chiaroscuro and violent contrasts of light and shadow, for which they dispensed with colors such as black, gray or brown: the chromatic research of impressionism led to the discarding of black in painting, since they claimed it is a color that does not exist in nature. From there they began to use a luminous range of "light on light" (white, blue, pink, red, violet), elaborating the shades with cold tones. Thus, the impressionists concluded that there is neither form nor color; the only real thing is the air-light relationship. In impressionist paintings the theme is light and its effects, beyond the anecdotal details of places and figures. Impressionism was considerably influenced by research in the field of photography, which had shown that the vision of an object depends on the quantity and quality of light. Impressionist painters were especially concerned with artificial light: according to Juan Antonio Ramírez (Mass Media and Art History, 1976), "the surprise at the effect of the new phenomenon of artificial light in the street, in cafés, and in the living room, gave rise to famous paintings such as Manet's Un bar aux Folies Bergère (1882, Courtauld Gallery, London), Renoir's Dancing at the Moulin de la Galette (1876, Musée d'Orsay, Paris) and Degas' Women in a Café (1877, Musée d'Orsay, Paris). Such paintings show the lighted lanterns and that glaucous tonality that only artificial light produces".
Numerous Impressionist works are set in bars, cafés, dance halls, theaters and other establishments, with lamps or candelabras of dim light that mixes with the smoky air of these places, or candle footlights in the case of theaters and opera houses. The main representatives were Claude Monet, Camille Pissarro, Alfred Sisley, Pierre-Auguste Renoir, and Edgar Degas, with an antecedent in Édouard Manet. The most strictly Impressionist painters were Monet, Sisley and Pissarro, the most concerned with capturing light in the landscape. Monet was a master in capturing atmospheric phenomena and the vibration of light on water and objects, with a technique of short brushstrokes of pure colors. He produced the greatest number of series of the same landscape at different times of the day, to capture all the nuances and subtle differences of each type of light, as in his series of The Station of Saint-Lazare, Haystacks, The Poplars, The Cathedral of Rouen, The Parliament of London, San Giorgio Maggiore or Water Lilies. His last works in Giverny on water lilies are close to abstraction, and in them he achieves an unparalleled synthesis of light and color. In the mid-1880s he painted coastal scenes of the French Riviera with the highest degree of luminous intensity ever achieved in painting, in which the forms dissolve in pure incandescence and whose only subject is now the sensation of light. Sisley also showed a great interest in the changing effects of light in the atmosphere, with a fragmented touch similar to that of Monet. His landscapes are of great lyricism, with a predilection for aquatic themes and a certain tendency toward the dissolution of form. Pissarro, on the other hand, focused more on a rustic-looking landscape painting, with a vigorous and spontaneous brushstroke that conveyed "an intimate and profound feeling for nature", as the critic Théodore Duret said of him. In addition to his countryside landscapes, he produced urban views of Paris, Rouen and Dieppe, and also made series of paintings at various times of the day and night, such as those of the Avenue de l'Opéra and the Boulevard de Montmartre. Renoir developed a more personal style, notable for its optimism and joie de vivre. He evolved from a realism of Courbetian influence to an impressionism of light and luminous colors, and for a time shared a style similar to that of Monet, with whom he spent several stays in Argenteuil. He differed from the latter especially in the greater presence of the human figure, an essential element for Renoir, as well as in the use of tones such as black that were rejected by the other members of the group. He liked the play of light and shadow, which he achieved by means of small spots, and attained great mastery of effects such as the beams of light filtering between the branches of trees, as seen in Dance at the Moulin de la Galette (1876, Musée d'Orsay, Paris) and in Torso, Effect of Sunlight (1875, Musée d'Orsay, Paris), where sunlight plays on the skin of a nude girl. Degas was an individual figure who, although he shared most of the impressionist assumptions, never considered himself part of the group. Contrary to the preferences of his peers, he did not paint from life and used drawing as a compositional basis. His work was influenced by photography and Japanese prints, and from his beginnings he showed interest in night and artificial light, as he himself expressed: "I work a lot on night effects, lamps, candles, etc.
The curious thing is not always to show the light source, but the effect of the light". In his series of works on dancers or horse races, he studied the effects of light in movement, in a disarticulated space in which the effects of lights and backlighting stand out. Many Impressionist works were almost exclusively about the effects of light on the landscape, which they tried to recreate as spontaneously as possible. However, this led in the 1880s to a certain reaction in which they tried to return to more classical canons of representation and a return to the figure as the basis of the composition. From then on, several styles derived from impressionism emerged, such as neo-impressionism (also called divisionism or pointillism) and post-impressionism. Neo-Impressionism took up the optical experimentation of Impressionism: the Impressionists used to blur the contours of objects by lowering the contrasts between light and shadow, which implied replacing objectual solidity with a disembodied luminosity, a process that culminated in Pointillism: in this technique there is no precise source of illumination, but each point is a light source in itself. The composition is based on juxtaposed ("divided") dots of a pure color, which merge in the eye of the viewer at a given distance. When these juxtaposed colors were complementary (red-green, yellow-violet, orange-blue) a greater luminosity was achieved. Pointillism, based largely on the theories of Michel-Eugène Chevreul (The Law of Simultaneous Contrast of Colors, 1839) and Ogden Rood (Modern Chromatics, 1879), defended the exclusive use of pure and complementary colors, applied in small brushstrokes in the form of dots that composed the image on the viewer's retina, at a certain distance. Its best exponents were Georges Seurat and Paul Signac. Seurat devoted his entire life to the search for a method that would reconcile science and aesthetics, a personal method that would transcend impressionism. His main concern was chromatic contrast, its gradation and the interaction between colors and their complementaries. He created a disc with all the tones of the rainbow united by their intermediate colors and placed the pure tones in the center, which he gradually lightened towards the periphery, where the pure white was located, so that he could easily locate the complementary colors. This disc allowed him to mix the colors in his mind before fixing them on the palette, thus reducing the loss of chromatic intensity and luminosity. In his works he first drew in black and white to achieve the maximum balance between light and dark masses, and applied the color by tiny dots that were mixed in the retina of the viewer by optical mixing. On the other hand, he took from Charles Henry his theory on the relationship between aesthetics and physiology, how some forms or spatial directions could express pleasure and pain; according to this author, warm colors were dynamogenic and cold ones inhibitory. From 1886 he focused more on interior scenes with artificial light. His work Chahut (1889–1890, Kröller-Müller Museum, Otterlo) had a powerful influence on Cubism for its way of modeling volumes in space through light, without the need to simulate a third dimension. Signac was a disciple of Seurat, although with a freer and more spontaneous style, not so scientific, in which the brilliance of color stands out. 
In his last years his works evolved toward a search for pure sensation, with a chromatism of expressionist tendency, while he reduced the pointillist technique to a grid of tesserae larger in size than the divisionist dots. In Italy there was a variant – the so-called divisionisti – who applied this technique to scenes of greater social commitment, owing to its link with socialism, although with some changes in technical execution, since instead of confronting complementary colors they contrasted them in terms of rays of light, producing images that stand out for their luminosity and transparency, as in the work of Angelo Morbelli. Gaetano Previati developed a style in which luminosity is linked to a symbolism related to life and nature, as in his Maternity (1890-1891, Banca Popolare di Novara), generally with a certain component of poetic evocation. Another member of the group, Vittore Grubicy de Dragon, wrote that "light is life and, if, as many rightly affirm, art is life, and light is a form of life, the divisionist technique, which tends to greatly increase the expressiveness of the canvas, can become the cradle of new aesthetic horizons for tomorrow". Post-impressionism was, rather than a homogeneous movement, a grouping of diverse artists initially trained in impressionism who later followed individual trajectories of great stylistic diversity. Its best representatives were Henri de Toulouse-Lautrec, Paul Gauguin, Paul Cézanne, and Vincent van Gogh. Cézanne established a compositional system based on geometric figures (cube, cylinder and pyramid), which would later influence Cubism. He also devised a new method of illumination, in which light is applied through the density and intensity of color, rather than through the transitional values between black and white. The one who experimented most in the field of light was Van Gogh, author of works of strong dramatic force and inner introspection, with sinuous and dense brushstrokes of intense color, in which he deforms reality, giving it a dreamlike air. Van Gogh's work shows influences as disparate as those of Millet and Hiroshige, while from the Impressionist school he was particularly influenced by Renoir. Already in his early works his interest in light is noticeable; he gradually lightened his palette until he practically reached a yellow monochrome, of fierce and temperamental luminosity. In his early works, such as The Potato Eaters (1885, Van Gogh Museum, Amsterdam), the influence of Dutch realism, with its tendency to chiaroscuro and dense color in thick brushstrokes, is evident; here he created a dramatic atmosphere of artificial light that emphasizes the tragedy of the miserable situation of these workers marginalized by the Industrial Revolution. Later his coloring became more intense, influenced by the divisionist technique, with superimposed brushstrokes in different tones; for the most illuminated areas he used yellow, orange and reddish tones, seeking a harmonious relationship among them all. After settling in Arles in 1888 he was fascinated by the limpid Mediterranean light, and in his landscapes of that period he created clear and shining atmospheres, with hardly any chiaroscuro. As was usual in impressionism, he sometimes made several versions of the same motif at different times of the day to capture its variations of light.
He also continued his interest in artificial and nocturnal lights, as in Café de noche, interior (1888, Yale University Art Gallery, New Haven), where the light of the lamps seems to vibrate thanks to the concentric halo-shaped circles with which he rendered the radiation of the light; or Café de noche, exterior (1888, Kröller-Müller Museum, Otterlo), where the luminosity of the café terrace contrasts with the darkness of the sky, where the stars seem like flowers of light. Light also plays a special role in his Sunflowers series (1888-1889), where he used all imaginable shades of yellow, which for him symbolized light and life, as he expressed in a letter to his brother Theo: "a sun, a light that, for lack of a better adjective, I can only define with yellow, a pale sulfur yellow, a pale lemon yellow". To highlight the yellow and orange, he used green and sky blue in the outlines, creating an effect of soft light intensity. In Italy during these years there was a movement called Scapigliatura (1860-1880), sometimes considered a predecessor of divisionism, characterized by its interest in the purity of color and the study of light. Artists like Tranquillo Cremona, Mosè Bianchi or Daniele Ranzoni tried to capture on canvas their feelings through chromatic vibrations and blurred contours, with characters and objects almost dematerialized. Giovanni Segantini was a singular artist who combined drawing in the academic tradition with a post-impressionist coloring in which light effects take on great prominence. Segantini's specialty was the mountain landscape, which he painted outdoors, with a technique of strong brushstrokes and simple colors, with a vibrant light that he found only in the high alpine mountains. In Germany, impressionism was represented by Fritz von Uhde, Lovis Corinth, and Max Slevogt. The first was more a plein-airist than strictly an impressionist, and rather than landscape he devoted himself to genre painting, especially of religious themes, works in which he also showed a special sensitivity to light. Corinth had a rather eclectic career, from academic beginnings – he was a disciple of Bouguereau – through realism and impressionism, to a certain decadentism and an approach to Jugendstil, ending finally in expressionism. Influenced by Rembrandt and Rubens, he painted portraits, landscapes and still lifes with a serene and brilliant chromatism. Slevogt assumed the fresh and brilliant chromatism of the Impressionists, although renouncing their fragmentation of colors, and his technique was one of loose brushstrokes and energetic movement, with bold and original light effects that denote a certain influence of the baroque art of his native Bavaria. In Great Britain, the work of James Abbott McNeill Whistler, American by birth but established in London from 1859, stood out. His landscapes are the antithesis of the sunny French landscapes, as they recreate the foggy and taciturn English climate, with a preference for night scenes, images from which he nevertheless knew how to distill an intense lyricism, with artificial light effects reflected in the waters of the Thames. In the United States, it is worth mentioning the work of John Singer Sargent, Mary Cassatt, and Childe Hassam. Sargent was an admirer of Velázquez and Frans Hals, and excelled as a society portraitist, with a virtuoso and elegant technique, both in oil and watercolor, the latter mainly in landscapes of intense color. 
Cassatt lived for a long time in Paris, where she moved in the Impressionist circle, with whom she shared subject matter more than technique, and she developed an intimate and sophisticated body of work, influenced by Japanese prints. Hassam's main motif was New York life, treated in a fresh but somewhat cloying style. Mention should also be made of Scandinavian impressionism, many of whose artists were trained in Paris. These painters had a special sensitivity to light, perhaps due to its absence in their native lands, which is why they traveled to France and Italy attracted by the "light of the south". The main exponents were Peder Severin Krøyer, Akseli Gallen-Kallela, and Anders Zorn. Krøyer showed a special interest in highly complex lighting effects, such as the mixing of natural and artificial light. Gallen-Kallela was an original artist who later approached symbolism, with a personal, expressive and stylized painting with a tendency towards romanticism and a special interest in Finnish folklore. Zorn specialized in portraits, nudes and genre scenes, with a brilliant brushstroke of vibrant luminosity. In Russia, Valentin Serov and Konstantin Korovin should be mentioned. Serov had a style similar to that of Manet or Renoir, with a taste for intense chromatism and light reflections, a bright light that extols the joy of life. Korovin painted both urban and natural landscapes in which he elevates a simple sketch of chromatic impression to the category of a work of art. In Spain, the work of Aureliano de Beruete and Darío de Regoyos stands out. Beruete was a disciple of Carlos de Haes, so he was trained in the realist landscape, but he assumed the impressionist technique after a period of training in France. An admirer of Velázquez's light, he knew how to apply it to the Castilian landscape – especially the mountains of Madrid – with his own personal style. Regoyos also trained with Haes and developed an intimate style halfway between pointillism and expressionism. Luminism and symbolism From the mid-19th century until practically the transition to the 20th century, various styles emerged that placed special emphasis on the representation of light, which is why they were generically referred to as "luminism", with various national schools in the United States and several European countries or regions. The term luminism was introduced by John Ireland Howe Baur in 1954 to designate the landscape painting done in the United States between 1840 and 1880, which he defined as "a polished and meticulous realism in which there are no noticeable brushstrokes and no trace of impressionism, and in which atmospheric effects are achieved by infinitely careful gradations of tone, by the most exact study of the relative clarity of nearer and more distant objects, and by an accurate rendering of the variations of texture and color produced by direct or reflected rays". The first was American Luminism, which brought together a group of landscape painters generally associated with the so-called Hudson River School, including to a greater or lesser extent Thomas Cole, Asher Brown Durand, Frederic Edwin Church, Albert Bierstadt, Martin Johnson Heade, Fitz Henry Lane, John Frederick Kensett, James Augustus Suydam, Francis Augustus Silva, Jasper Francis Cropsey and George Caleb Bingham. In general, their works were based on grandiose compositions, with a deeply receding horizon line and veiled skies, in atmospheres of strong expressiveness. 
Their light is serene and peaceful, reflecting a love of nature, a nature that in the United States of the time was still largely virgin and paradisiacal, yet to be explored. It is a transcendent light, of spiritual significance, whose radiance conveys a message of communion with nature. Although these painters use a classical structure and composition, their treatment of light is original because of the infinity of subtle variations in tonality, achieved through a meticulous study of the natural environment of their country. According to Barbara Novak, Luminism is a more serene form of the romantic aesthetic concept of the sublime, which had its translation in the deep expanses of the North American landscape. Some historians differentiate between pure Luminism and Hudson River School landscape painting: in the former, the landscape – centered more on the New England area – is more peaceful, more anecdotal, with delicate tonal gradations characterized by a crystalline light that seems to emanate from the canvas, in neat brushstrokes that seem to recreate the surface of a mirror, and in compositions whose excess of detail appears unreal in its straightness and geometrism, resulting in an idealization of nature. Thus understood, Luminism would encompass Heade, Lane, Kensett, Suydam and Silva. Hudson River landscape painting, on the other hand, would have a more cosmic vision and a predilection for a wilder and more grandiloquent nature, with more dramatic visual effects, as seen in the work of Cole, Durand, Church, Bierstadt, Cropsey and Bingham. It must be said, however, that neither group ever accepted these labels. Thomas Cole was the pioneer of the school. English by birth, he took Claude Lorrain as one of his main references. Settled in New York in 1825, he began to paint landscapes of the Hudson River area, with the aim of achieving "an elevated style of landscape" in which the moral message was equivalent to that of history painting. He also painted biblical subjects, in which light has a symbolic component, as in his Expulsion from the Garden of Eden (1828, Museum of Fine Arts, Boston). Durand was a little older than Cole and, after Cole's premature death, was considered the best American landscape painter of his time. An engraver by trade, from 1837 he turned to natural landscape painting, with a more intimate and picturesque vision of nature than Cole's allegorical one. Church was Cole's first disciple; Cole transmitted to him his vision of a majestic and exuberant nature, which Church reflected in his scenes of the American West and the South American tropics. Bierstadt, of German origin, was influenced by Turner, whose atmospheric effects are seen in works such as In the Sierra Nevada Mountains in California (1868, Smithsonian American Art Museum, Washington D. C.), a lake between mountains seen after a storm, with the sun's rays breaking through the clouds. Heade devoted himself to the country landscapes of Massachusetts, Rhode Island and New Jersey: meadows of endless horizons under clear or cloudy skies and lights of various times of day, sometimes refracted by humid atmospheres. Fitz Henry Lane is considered the greatest exponent of luminism. Disabled since childhood by polio, he focused on the landscape of his native Gloucester (Massachusetts), with works that denote the influence of the English seascape painter Robert Salmon, in which light has a special role, a placid light that gives a sense of eternity, of time stopped in serene perfection and harmony. 
Suydam focused on the coastal landscapes of New York and Rhode Island, in which he was able to reflect the light effects of the Atlantic coast. Kensett was influenced by Constable and devoted himself to the New England landscape with a special focus on the luminous reflections of the sky and the sea. Silva also excelled in the seascape, a genre in which he masterfully captured the subtle gradations of light in the coastal atmosphere. Cropsey combined the panoramic effect of the Hudson River School with the more serene luminism of Lane and Heade, with a meticulous and somewhat theatrical style. Bingham masterfully captured in his scenes of the Far West the limpid and clear light of dawn, his favorite when recreating scenes with American Indians and pioneers of the conquest of the West. Winslow Homer, considered the best American painter of the second half of the 19th century, who excelled in both oil and watercolor and in both landscape and popular scenes of American society, deserves special mention. One of his favorite genres was the seascape, in which he displayed a great interest in atmospheric effects and the changing lights of the day. His painting Moonlight. Wood Island Lighthouse (1894, Museum of Modern Art, New York) was painted entirely by moonlight, in five hours of work. Another important school was Belgian Luminism. In Belgium, the influence of French Impressionism was strongly felt, initially in the work of the group called Les Vingt, as well as in the School of Tervueren, a group of landscape painters who already showed their interest in light, especially in the atmospheric effects, as can be seen in the work of Isidore Verheyden. Later, Pointillism was the main influence on Belgian artists of the time, a trend embraced by Émile Claus and Théo van Rysselberghe, the main representatives of Belgian Luminism. Claus adopted Impressionist techniques, although he maintained academic drawing as the basis for his compositions, and in his work – mainly landscapes – he showed great interest in the study of the effects of light in different atmospheric conditions, with a style that sometimes recalls Monet. Rysselberghe was influenced by Manet, Degas, and Whistler, as well as by the Baroque painter Frans Hals and Spanish painting. His technique was of loose and vigorous brushwork, with great luminous contrasts. A luminist school also emerged in the Netherlands, more closely linked to the incipient Fauvism, in which Jan Toorop, Leo Gestel, Jan Sluyters, and the early work of Piet Mondrian stood out. Toorop was an eclectic artist, who combined different styles in the search for his own language, such as symbolism, modernism, pointillism, Gauguinian synthetism, Beardsley's linearism, and Japanese printmaking. He was especially devoted to allegorical and symbolic themes and, since 1905, to religious themes. In Germany, Max Liebermann received an initial realist influence – mainly from Millet – and a slight impressionist inclination towards 1890, until he ended up in a luminism of personal inspiration, with violent brushstrokes and brilliant light, a light of his own research with which he experimented until his death in 1935. In Spain, luminism developed especially in Valencia and Catalonia. The main representative of the Valencian school was Joaquín Sorolla, although the work of Ignacio Pinazo, Teodoro Andreu, Vicente Castell and Francisco Benítez Mellado is also noteworthy. 
Sorolla was a master at capturing light in nature, as is evident in his seascapes, painted with a graduated palette of colors and a variable brushstroke, broader for defining shapes and smaller to capture the different effects of light. He interpreted the Mediterranean sun like no other; a French critic said of him that "never has a paintbrush contained so much sun". After a period of training, in the 1890s he began to consolidate his style, based on genre subjects treated with a technique of rapid execution, preferably outdoors, with a thick, energetic and impulsive brushstroke and a constant concern for the capture of light, whose subtler effects he never ceased to investigate. La vuelta de la pesca (1895) is the first work to show a particular interest in the study of light, especially in its reverberation on the water and on the sails moved by the wind. It was followed by Pescadores valencianos (1895), Cosiendo la vela (1896) and Comiendo en la barca (1898). In 1900 he visited the Universal Exhibition in Paris with Aureliano de Beruete, where he was fascinated by the intense chromatism of Nordic artists such as Anders Zorn, Max Liebermann or Peder Severin Krøyer; from then on he intensified his coloring and, especially, his luminosity, with a light that invaded the whole painting, emphasizing blinding whites, as in Jávea (1900), Idilio (1900), Playa de Valencia (1902), in two versions, morning and sunset, Evening Sun (1903), The Three Sails (1903), Children at the Seashore (1903), Fisherman (1904), Summer (1904), The White Boat (1905), Bathing in Jávea (1905), etc. These are mostly seascapes, with a warm Mediterranean light; he felt a special predilection for the more golden light of September. From 1906 he lowered the intensity of his palette, with a more nuanced tonality and a predilection for mauve tones; he continued with the seascapes, but increased his production of other types of landscape, as well as gardens and portraits. He summered in Biarritz, and the pale, soft light of the Atlantic led him to lower the luminosity of his works. He also continued with his Valencian scenes: Paseo a orillas del mar (1909), Después del baño (1909). Between 1909 and 1910 his stays in Andalusia induced him to blur the contours, with a technique close to pointillism and a predominance of white, pink, and mauve. Among his last works is La bata rosa (1916), in which he unleashes an abundance of light that filters through every part of the canvas, privileging light and color over the treatment of the contours, which appear blurred. The Luminist School of Sitges emerged in Catalonia, active in this town in the Garraf between 1878 and 1892. Its most prominent members were Arcadi Mas i Fondevila, Joaquim de Miró, Joan Batlle i Amell, Antoni Almirall and Joan Roig i Soler. Opposed in a certain way to the Olot School, whose painters treated the landscape of inland Catalonia with a softer and more filtered light, the Sitges artists opted for the warm and vibrant Mediterranean light and the atmospheric effects of the Garraf coast. Heirs to a large extent of Fortuny, the members of this school sought to reflect faithfully the luminous effects of the surrounding landscape, in harmonious compositions that combined verism with a certain poetic and idealized vision of nature, with a subtle chromaticism and a fluid brushstroke that was sometimes described as impressionist. 
The Sitges School is generally considered a precursor of Catalan modernism: two of its main representatives, Ramon Casas and Santiago Rusiñol, spent several seasons in the town of Sitges, where they adopted the custom of painting d'après nature and assumed as the protagonist of their works the luminosity of the environment that surrounded them, although with other formal and compositional solutions in which the influence of French painting is evident. Casas studied in Paris, where he was trained in impressionism, with special influence of Degas and Whistler. His technique stands out for the synthetic brushstroke and the somewhat blurred line, with a theme focused preferably on interiors and outdoor images, as well as popular scenes and social vindication. Rusiñol showed a special sensitivity for the capture of light especially in his landscapes and his series of Gardens of Spain – he especially loved the gardens of Mallorca (the sones) and Granada – in which he developed a great ability for the effects of light filtered between the branches of the trees, creating unique environments where light and shadow play capriciously. Likewise, Rusiñol's light shows the longing for the past, for the time that flees, for the instant frozen in time whose memory will live on in the artist's work. From the 1880s until the turn of the century, symbolism was a fantastic and dreamlike style that emerged as a reaction to the naturalism of the realist and impressionist currents, placing special emphasis on the world of dreams, as well as on satanic and terrifying aspects, sex and perversion. A main characteristic of symbolism was aestheticism, a reaction to the prevailing utilitarianism of the time and to the ugliness and materialism of the industrial era. Symbolism gave art and beauty an autonomy of their own, synthesized in Théophile Gautier's formula "art for art's sake" (L'art pour l'art). This current was also linked to modernism (also known as Art Nouveau in France, Modern Style in the United Kingdom, Jugendstil in Germany, Sezession in Austria or Liberty in Italy). Symbolism was an anti-scientific and anti-naturalist movement, so light lost objectivity and was used as a symbolic element, in conjunction with the rest of the visual and iconographic resources of this style. It is a transcendent light, which behind the material world suggests a spirituality, whether religious or pantheistic, or perhaps simply a state of mind of the artist, a feeling, an emotion. Light, by its dematerialization, exerted a powerful influence on these artists, a light far removed from the physical world in its conception, although for its execution they often made use of impressionist and pointillist techniques. The movement originated in France with figures such as Gustave Moreau, Odilon Redon and Pierre Puvis de Chavannes. Moreau was still trained in romanticism under the influence of his teacher, Théodore Chassériau, but evolved a personal style in both subject matter and technique, with mystical images with a strong component of sensuality, a resplendent chromaticism with an enamel-like finish and the use of a chiaroscuro of golden shadows. Redon developed a fantastic and dreamlike theme, influenced by the literature of Edgar Allan Poe, which largely preceded surrealism. Until the age of fifty he worked almost exclusively in charcoal drawing and lithography, although he later became an excellent colorist, both in oil and pastel. 
Puvis de Chavannes was an outstanding muralist, a procedure that suited him well to develop his preference for cold tones, which gave the appearance of fresco painting. His style was more serene and harmonious, with an allegorical theme evoking an idealized past, simple forms, rhythmic lines and a subjective coloring, far from naturalism. In France there was also the movement of the Nabis ("prophets" in Hebrew), formed by Paul Sérusier, Édouard Vuillard, Pierre Bonnard, Maurice Denis and Félix Vallotton. This group was influenced by Gauguin's rhythmic scheme and stood out for an intense chromatism of strong expressiveness. Another focus of symbolism was Belgium, where the work of Félicien Rops, Fernand Khnopff and William Degouve de Nuncques should be noted. The first was a painter and graphic artist of great imagination, with a predilection for a theme centered on perversity and eroticism. Khnopff developed a dreamlike-allegorical theme of women transformed into angels or sphinxes, with disturbing atmospheres of great technical refinement. Degouve de Nuncques elaborated urban landscapes with a preference for nocturnal settings, with a dreamlike component precursor of surrealism: his work The Blind House (1892, Kröller-Müller Museum, Otterlo) influenced René Magritte's The Empire of Lights (1954, Royal Museums of Fine Arts of Belgium, Brussels). In Central Europe, the Swiss Arnold Böcklin and Ferdinand Hodler and the Austrian Gustav Klimt stood out. Böcklin specialized in a theme of fantastic beings, such as nymphs, satyrs, tritons or naiads, with a somber and somewhat morbid style, such as his painting The Island of the Dead (1880, Metropolitan Museum of Art, New York), where a pale, cold and whitish light envelops the atmosphere of the island where Charon's boat is headed. Hodler evolved from a certain naturalism to a personal style he called "parallelism", characterized by rhythmic schemes in which line, form and color are reproduced in a repetitive way, with simplified and monumental figures. It was in his landscapes that he showed the greatest luminosity, with pure and vibrant coloring. Klimt had an academic training, to lead to a personal style that synthesized impressionism, modernism and symbolism. He had a preference for mural painting, with an allegorical theme with a tendency towards eroticism, and with a decorative style populated with arabesques, butterfly wings or peacocks, and with a taste for the golden color that gave his works an intense luminosity. In Italy, it is worth mentioning Giuseppe Pellizza da Volpedo, formed in the divisionist environment, but who evolved to a personal style marked by an intense and vibrant light, whose starting point is his work Lost Hopes (1894, Ponti-Grün collection, Rome). In The Rising Sun or the Sun (1903-1904, National Gallery of Modern Art, Rome) he carried out a prodigious exercise in the exaltation of light, a refulgent dawn light that peeks over a mountainous horizon and seems to burst into a myriad of rays that spread in all directions, dazzling the viewer. A symbolic reading can be established for this work, given the social and political commitment of the artist, since the rising sun was taken by socialism as a metaphor for the new society to which this ideology aspired. In the Scandinavian sphere, it is worth remembering the Norwegian Christian Krohg and the Danish Vilhelm Hammershøi and Jens Ferdinand Willumsen. 
Krohg combined natural and artificial lights, often with theatrical effects and certain unreal connotations, as in The Sleeping Seamstress (1885, Nasjonalgalleriet, Oslo), where the double presence of a lamp next to a window through which daylight enters provokes a sensation of timelessness, of temporal indefinition. Hammershøi was a virtuoso in the handling of light, which he considered the main protagonist of his works. Most of his paintings are set in interior spaces with light filtered through doors or windows, and figures generally seen from behind. Willumsen developed a personal style based on the influence of Gauguin, with a taste for bright colors, as in After the Storm (1905, Nasjonalgalleriet, Oslo), a seascape with a dazzling sun that seems to explode in the sky. Finally, it is worth mentioning a phenomenon between the 19th and 20th centuries that was a precedent for avant-garde art, especially in its anti-academic component: naïve art (naïf in French), a term applied to a series of self-taught painters who developed a spontaneous style, alien to the technical and aesthetic principles of traditional painting, sometimes labeled as childish or primitive. One of its best representatives was Henri Rousseau, a customs officer by trade, who produced a personal body of work, with a poetic tone and a taste for the exotic, in which he disregarded perspective and resorted to unreal-looking lighting, without shadows or perceptible light sources, a type of image that influenced artists such as Picasso or Kandinski and movements such as metaphysical painting and surrealism. 20th Century The art of the 20th century underwent a profound transformation: in a more materialistic, more consumerist society, art was directed to the senses, not to the intellect. The avant-garde movements arose, which sought to integrate art into society through a greater interrelation between artist and spectator, since it was the latter who interpreted the work, and could discover meanings that the artist did not even know. Avant-gardism rejected the traditional methods of optical representation – Renaissance perspective – to vindicate the two-dimensionality of painting and the autonomous character of the image, which implied the abandonment of spatial and light contrasts. In their place, light and shadow would no longer be instruments of a technique of spatial representation, but integral parts of the image, of the conception of the work as a homogeneous whole. Other artistic media such as photography, film and video also had a notable influence on the art of this century, as did, in relation to light, the installation, one of whose variants is light art. Moreover, the new interrelationship with the spectator means that the artist does not reflect what he sees, but lets the spectator see his vision of reality, which will be interpreted individually by each person. Advances in artificial light (carbon and tungsten filaments, neon lights) led society in general to a new sensitivity to luminous impacts and, for artists in particular, to a new reflection on the technical and aesthetic properties of the new technological advances. 
Many artists of the new century experimented with all kinds of lights and their interrelation, such as the mixture and interweaving of natural and artificial lights, the control of the focal point, the dense atmospheres, the shaded or transparent colors and other types of sensorial experiences, already initiated by the impressionists but which in the new century acquired a category of their own. Avant-garde The emergence of the avant-garde at the turn of the century brought a rapid succession of artistic movements, each with a particular technique and a particular vision of the function of light and color in painting: fauvism and expressionism were heirs of post-impressionism and treated light to the maximum of its saturation, with strong chromatic contrasts and the use of complementary colors for shadows; cubism, futurism and surrealism had in common a subjective use of color, giving primacy to the expression of the artist over the objectivity of the image. One of the first movements of the 20th century concerned with light and, especially, color, was Fauvism (1904-1908). This style involved experimentation in the field of color, which was conceived in a subjective and personal way, applying emotional and expressive values to it, independent of nature. For these artists, colors had to generate emotions, through a subjective chromatic range and brilliant workmanship. In this movement a new conception of pictorial illumination arose, which consisted in the negation of shadows; the light comes from the colors themselves, which acquire an intense and radiant luminosity, whose contrast is achieved through the variety of pigments used. Fauvist painters include Henri Matisse, Albert Marquet, Raoul Dufy, André Derain, Maurice de Vlaminck and Kees van Dongen. Perhaps the most gifted was Matisse, who "discovered" light in Collioure, where he understood that intense light eliminates shadows and highlights the purity of colors; from then on he used pure colors, to which he gave an intense luminosity. According to Matisse, "color contributes to expressing light, not its physical phenomenon but the only light that exists in fact, that of the artist's brain". One of his best works is Luxury, Calm and Voluptuousness (1904, Musée d'Orsay, Paris), a scene of bathers on the beach illuminated by intense sunlight, in a pointillist technique of juxtaposed patches of pure and complementary colors. Related to this style was Pierre Bonnard, who had been a member of the Nabis, an intimist painter with a predilection for the female nude, as in his Nude against the light (1908, Royal Museums of Fine Arts of Belgium, Brussels), in which the woman's body is elaborated with light, enclosed in a space formed by the vibrant light of a window sifted by a blind. Expressionism emerged as a reaction to impressionism, against which they defended a more personal and intuitive art, where the artist's inner vision – the "expression" – prevailed over the representation of reality – the "impression". In their works they reflected a personal and intimate theme with a taste for the fantastic, deforming reality to accentuate the expressive character of the work. Expressionism was an eclectic movement, with multiple tendencies in its midst and a diverse variety of influences, from post-impressionism and symbolism to fauvism and cubism, as well as some aniconic tendencies that would lead to abstract art (Kandinski). 
Expressionist light is more conceptual than sensorial; it is a light that emerges from within and expresses the artist's mentality, his consciousness, his way of seeing the world, his subjective "expression". With precedents in the figures of Edvard Munch and James Ensor, the movement formed mainly around two groups: Die Brücke (Ernst Ludwig Kirchner, Erich Heckel, Karl Schmidt-Rottluff, Emil Nolde) and Der Blaue Reiter (Vasili Kandinski, Franz Marc, August Macke, Paul Klee). Other exponents were the Vienna Group (Egon Schiele, Oskar Kokoschka) and the School of Paris (Amedeo Modigliani, Marc Chagall, Georges Rouault, Chaïm Soutine). Edvard Munch was linked in his beginnings to symbolism, but his early work already reflects a certain existential anguish that would lead him to a personal painting of strong psychological introspection, in which light is a reflection of the emptiness of existence, of the lack of communication and of the subordination of physical reality to the artist's inner vision, as can be seen in the faces of his characters, with a spectral lighting that gives them the appearance of automatons. The members of Die Brücke ("The Bridge") – especially Kirchner, Heckel and Schmidt-Rottluff – developed a dark, introspective and anguished subject matter, where form, color and light are subjective, resulting in tense, unsettling works that emphasize the loneliness and rootlessness of the human being. The light in these artists is not illuminating; it does not respond to physical criteria, as can be seen in Kirchner's Erich Heckel and Otto Müller Playing Chess (1913, Brücke Museum, Berlin), where the lamp on the table does not radiate light and constitutes a strange object, alien to the scene. Der Blaue Reiter ("The Blue Rider") emerged in Munich in 1911, and its members shared less a common stylistic stamp than a certain vision of art, in which the creative freedom of the artist and the personal, subjective expression of his works prevailed. It was a more spiritual and abstract movement, with a technical predilection for watercolor, which gave their works an intense chromatism and luminosity. Cubism (1907-1914) was based on the deformation of reality by destroying the spatial perspective of Renaissance origin, organizing space according to a geometric grid, with simultaneous vision of objects, a range of cold and muted colors, and a new conception of the work of art, with the introduction of collage. It was the first movement to dissociate light from reality, eliminating the tangible focus – whether natural or artificial – that had illuminated pictures throughout the previous history of painting; in its place, each part of the picture, each space deconstructed into geometric planes, has its own luminosity. Jean Metzinger, in On Cubism (1912), wrote that "beams of light and shadows distributed in such a way that one engenders the other plastically justify the ruptures whose orientation creates the rhythm". The main figure of this movement was Pablo Picasso, one of the great geniuses of the 20th century, along with Georges Braque, Jean Metzinger, Albert Gleizes, Juan Gris, and Fernand Léger. 
Before arriving at cubism, Picasso went through his so-called blue and rose periods: in the first, the influence of El Greco can be seen in elongated figures of dramatic appearance, with profiles highlighted by a yellowish or greenish light and shadows of thick black brushstrokes; in the second, he dealt with kinder and more human themes, with characteristic scenes of figures immersed in empty, luminous-looking landscapes. His cubist stage is divided into two phases: in "analytical cubism" he focused on portraits and still lifes, with images broken down into planes in which light loses its modeling and volume-defining character to become a constructive element that emphasizes contrast, giving the image an iridescent appearance; in "synthetic cubism" he expanded the chromatic range and included extra-pictorial elements, such as texts and fragments of literary works. After his cubist stage, his most famous work is Guernica, elaborated entirely in shades of gray, a night scene illuminated by a ceiling light bulb – shaped like a sun and an eye at the same time – and by an oil lamp in the hands of the figure leaning out of the window, with a light constructed by planes that serve as counterpoints of brightness in the midst of darkness. A movement derived from Cubism was Orphism, represented especially by Robert Delaunay, who experimented with light and color in his search, tending towards abstraction, for rhythm and movement, as in his series on the Eiffel Tower or in Field of Mars: The Red Tower, where he decomposes light into the colors of the prism and diffuses it through the space of the painting. Delaunay studied optics and came to the conclusion that "the fragmentation of form by light creates planes of colors", so in his work he intensively explored the rhythms of colors, a style he called "simultaneism", taking up the scientific concept of simultaneous contrast formulated by Chevreul. For Delaunay, "painting is, properly speaking, a luminous language", which led him in his artistic evolution towards abstraction, as in his series of Windows, Disks and Circular and Cosmic Forms, in which he represents beams of light elaborated with bright colors in an ideal space. Another style concerned with optical experimentation was Futurism (1909–1930), an Italian movement that exalted the values of the technical and industrial progress of the 20th century and emphasized aspects of reality such as movement, speed and simultaneity of action. Prominent among its ranks were Giacomo Balla, Gino Severini, Carlo Carrà and Umberto Boccioni. These artists were the first to treat light in an almost abstract way, as in the paintings of Boccioni, which drew on the pointillist technique and the optical theories of color to study the abstract effects of light, as in his work The City Rises (1910-1911, Museum of Modern Art, New York). Boccioni declared in 1910 that "movement and light destroy the matter of objects" and aimed to "represent not the optical or analytical impression, but the psychic and total experience". Gino Severini evolved from a still pointillist technique towards Cubist spatial fragmentation applied to Futurist themes, as in his Expansión de la luz (1912, Museo Thyssen-Bornemisza, Madrid), where the fragmentation of color planes contributes to the construction of plastic rhythms, enhancing the sensation of movement and speed. 
Carlo Carrà elaborated works of pointillist technique in which he experimented with light and movement, as in La salida del teatro (1909, private collection), where he shows a series of pedestrians barely sketched in their elemental forms and elaborated with lines of light and color, while in the street artificial lights gleam, whose flashes seem to cut the air. Balla synthesized neo-Impressionist chromaticism, pointillist technique and cubist structural analysis in his works, decomposing light to achieve his desired effects of movement. In La jornada del operario (1904, private collection), he divided the work into three scenes separated by frames, two on the left and one on the right of double size. They represent dawn, noon and twilight, in which he depicts various phases of the construction of a building, consigning a day's work; the two parts on the left are actually a single image separated by the frame, but with a different treatment of light for the time of day. In Arc Lamp (1911-1912, Museum of Modern Art, New York) he made an analytical study of the patterns and colors of a beam of light, an artificial light in conflict with moonlight, in a symbolism in which the electric light represents the energy of youth as opposed to the lunar light of classicism and romanticism. In this work the light seems to be observed under a microscope, from the incandescent center of the lamp sprouts a series of colored arrows that gradually lose chromatism as they move away from the bright focus until they merge with the darkness. Balla himself stated that "the splendor of light is obtained by bringing pure colors closer together. This painting is not only original as a work of art, but also scientific, since I sought to represent light by separating the colors that compose it". Outside Italy, Futurism influenced various parallel movements such as English Vorticism, whose best exponent was Christopher Richard Wynne Nevinson, a painter who showed a sensitivity for luminous effects reminiscent of Severini, as seen in his Starry Shell (1916, Tate Gallery, London); or Russian Rayonism, represented by Mikhail Larionov and Natalia Goncharova, a style that combined the interest in light beams typical of analytical cubism with the radiant dynamism of futurism, although it later evolved towards abstraction. In Italy also emerged the so-called metaphysical painting, considered a forerunner of surrealism, represented mainly by Giorgio de Chirico and Carlo Carrà. Initially influenced by symbolism, De Chirico was the creator of a style opposed to futurism, more serene and static, with certain reminiscences of classical Greco-Roman art and Renaissance linear perspective. In his works he created a world of intellectual placidity, a dreamlike space where reality is transformed for the sake of a transcendent evocation, with spaces of wide perspectives populated by figures and isolated objects in which a diaphanous and uniform illumination creates elongated shadows of unreal aspect, creating an overwhelming sensation of loneliness. In his urban spaces, empty and geometrized, populated by faceless mannequins, the lights and shadows create strong contrasts that help to enhance the dreamlike factor of the image. Another artist of this movement is Giorgio Morandi, author of still lifes in which chiaroscuro has a clear protagonism, in compositions where light and shadow play a primordial role to build an unreal and dreamlike atmosphere. 
With abstract art (1910-1932) the artist no longer tries to reflect reality but his inner world, to express his feelings. Art loses all real appearance and imitation of nature to focus on the simple expressiveness of the artist, in shapes and colors that lack any referential component. Initiated by Vasili Kandinski, it was developed by the neoplasticist movement (De Stijl), with figures such as Piet Mondrian and Theo van Doesburg, as well as by Russian Suprematism (Kazimir Malevich). The presence of light in abstract art is inherent to its evolution: although the movement dispenses with subject matter, light remains part of the work, for the human being cannot detach himself completely from the reality that shapes his existence. The road towards abstraction ran along two paths: one of a psychic-emotive character, originating in symbolism and expressionism, and the other objective-optical, derived from fauvism and cubism. Light played a special role in the second, since from the cubist beams of light it was a logical step to isolate them from the reality that produces them and to express them in abstract forms. In abstract art, light loses the prominence it has in an image based on natural reality, but its presence is still perceived in the tonal gradations and plays of chiaroscuro that appear in numerous works by abstract artists such as Mark Rothko, whose images of intense chromaticism have a luminosity that seems to radiate from the color of the work itself. The pioneer of abstraction, Vasili Kandinski, received the inspiration for this type of work when he woke up one day and saw one of his paintings in which the sunlight was shining brightly, diluting the forms and accentuating the chromaticism, which showed an unprecedented brightness; he then began a process of experimentation to find the perfect chromatic harmony, giving total freedom to color without any formal or thematic subordination. Kandinski's research was continued by Russian suprematism, especially by Kazimir Malevich, an artist with post-impressionist and fauvist roots who later adopted cubism, leading to a geometric abstraction in which color acquires special relevance, as shown in his Black on Black (1913) and White on White (1919). In the interwar period, the New Objectivity (Neue Sachlichkeit) movement emerged in Germany, which returned to realistic figuration and the objective representation of the surrounding reality, with a marked component of social protest. Although they advocated realism, these artists did not renounce the technical and aesthetic achievements of avant-garde art, such as Fauvist and expressionist coloring, Futurist "simultaneous vision" or the application of photomontage to painting. In this movement, the urban landscape, populated with artificial lights, played a special role. Among its main representatives were Otto Dix, George Grosz, and Max Beckmann. Surrealism (1924-1955) placed special emphasis on imagination, fantasy and the world of dreams, with a strong influence of psychoanalysis. Surrealist painting moved between figuration (Salvador Dalí, Paul Delvaux, René Magritte, Max Ernst) and abstraction (Joan Miró, André Masson, Yves Tanguy, Paul Klee). 
René Magritte treated light as a special object of research, as is evident in his work The Empire of Lights (1954, Royal Museums of Fine Arts of Belgium, Brussels), where he presents an urban landscape with a house surrounded by trees in the lower part of the painting, immersed in a nocturnal darkness, and a daytime sky furrowed with clouds in the upper part; in front of the house there is a street lamp whose light, together with that of two windows on the upper floor of the house, is reflected in a pond located at the foot of the house. The contrasting day and night represent waking and sleeping, two worlds that never come to coexist. Dalí evolved from a formative phase in which he tried different styles (impressionism, pointillism, futurism, cubism, fauvism) to a figurative surrealism strongly influenced by Freudian psychology. In his work he showed a special interest in light, a Mediterranean light that in many of his works bathes the scene with intensity: The Bay of Cadaqués (1921, private collection), The Phantom Chariot (1933, Nahmad collection, Geneva), Solar Table (1936, Boijmans Van Beuningen Museum, Rotterdam), Composition (1942, Tel Aviv Museum of Art). It is the light of his native Empordà, a region marked by the tramuntana wind, which, according to Josep Pla, generates a "static, clear, shining, sharp, glittering" light. Dalí's treatment of light is generally surprising, with singular fantastic effects, contrasts of light and shadow, backlighting and countershadows, always in continuous research of new and surprising effects. Towards 1948 he abandoned avant-gardism and returned to classicist painting, although interpreted in a personal and subjective way, in which he continues his incessant search for new pictorial effects, as in his "atomic stage" in which he seeks to capture reality through the principles of quantum physics. Among his last works stand out for their luminosity: Christ of Saint John of the Cross (1951, Kelvingrove Museum, Glasgow), The Last Supper (1955, National Gallery of Art, Washington D. C.), The Perpignan Station (1965, Museum Ludwig, Cologne) and Cosmic Athlete (1968, Zarzuela Palace, Madrid). Joan Miró reflected in his works a light of magical and at the same time telluric aspect, rooted in the landscape of the countryside of Tarragona that was so dear to him, as is evident in La masía (1921-1922, National Gallery of Art, Washington D. C.), illuminated by a twilight that bathes the objects in contrast with the incipient darkness of the sky. In his work he uses flat and dense colors, in preferably nocturnal environments with special prominence of empty space, while objects and figures seem bathed in an unreal light, a light that seems to come from the stars, for which he felt a special devotion. In the United States, between the 1920s and 1930s, several figurative movements emerged, especially interested in everyday reality and life in cities, always associated with modern life and technological advances, including artificial lights in streets and avenues as well as commercial and indoor lights. The first of these movements was the Ashcan School, whose leader was Robert Henri, and where George Wesley Bellows and John French Sloan also stood out. In opposition to American Impressionism, these artists developed a style of cold tones and dark palette, with a theme centered on marginalization and the world of nightlife. 
This school was followed by so-called American realism or the American Scene, whose main representative was Edward Hopper, a painter concerned with the expressive power of light, in urban images of anonymous and lonely characters framed in light and deep shadow, with a palette of cold colors influenced by the luminosity of Vermeer. Hopper took from black-and-white cinema the contrast between light and shadow, which would be one of the keys to his work. He had a special predilection for the light of Cape Cod (Massachusetts), his summer resort, as can be seen in Sunlight on the Second Floor (1960, Whitney Museum of American Art, New York). His scenes are notable for their unusual perspectives, strong chromaticism and contrasts of light, in which metallic and electrifying glows stand out. In New York Cinema (1939, Museum of Modern Art, New York) he showed the interior of a cinema dimly illuminated by – as he himself noted in his notebook – "four sources of light, with the brightest point in the girl's hair and in the flash of the handrail". On one occasion, Hopper went so far as to state that the purpose of his painting was none other than to "paint sunlight on the side wall of a house." One critic defined the light in Hopper's mysterious paintings as a light that "illuminates but never warms," a light at the service of his vision of the desolate American urban landscape. Latest trends Since the Second World War, art has undergone a vertiginous evolutionary dynamic, with styles and movements following one another ever more rapidly. The modern project that originated with the historical avant-gardes reached its culmination with various anti-material styles that emphasized the intellectual origin of art over its material realization, such as action art and conceptual art. Once this level of analytical probing of art was reached, the inverse effect was produced – as is usual in the history of art, where different styles confront and oppose each other, the rigor of some succeeding the excess of others, and vice versa – and there was a return to the classical forms of art, accepting its material and aesthetic component and renouncing its revolutionary, society-transforming character. Thus postmodern art emerged, in which the artist moves freely between different techniques and styles, without any spirit of protest, and returns to artisanal work as the essence of the artist. The first movements after the war were abstract, such as American abstract expressionism and European informalism (1945-1960), a set of trends based on the expressiveness of the artist, who renounces any rational aspect of art (structure, composition, preconceived application of color). It is an eminently abstract art, in which the material support of the work becomes relevant and assumes the leading role over any theme or composition. Abstract expressionism – also called action painting – was characterized by the use of dripping, the dripping of paint onto the canvas, on which the artist intervened with various tools or with his own body. Among its members, Jackson Pollock and Mark Rothko stand out. In addition to pigments, Pollock used glitter and aluminum enamel, which stands out for its brightness, giving his works a metallic light and creating a kind of chiaroscuro. For his part, Rothko worked in oil, with overlapping layers of very fluid paint, which created glazes and transparencies. 
He was especially interested in color, which he combined in unprecedented ways but with a great sense of balance and harmony, and he used white as a base to create luminosity. European informalism includes various currents such as tachism, art brut and matter painting. Georges Mathieu, Hans Hartung, Jean Fautrier, Jean Dubuffet, Lucio Fontana and Antoni Tàpies stand out. The last of these, Tàpies, developed a personal and innovative style, with a mixed technique of crushed marble dust and pigments, which he applied to the canvas and then worked over with various interventions by means of grattage. He tended towards a dark, almost "dirty" coloring, but in some of his works (such as Zoom, 1946) he added Spanish white (whiting), which gave them great luminosity. Among the later movements especially concerned with light and color was op-art (optical art, also called kinetic or kinetic-luminescent art), a style that emphasized the visual aspect of art, especially optical effects, which were produced either by optical illusions (ambiguous figures, persistent images, the moiré effect, illustrated in the sketch after this paragraph), or by movement or the play of light. Victor Vasarely, Jesús Rafael Soto and Yaacov Agam stood out. The technique of these artists is mixed, transcending canvas or pigment to incorporate metallic pieces, plastics and all kinds of materials; in fact, more than the material substrate of the work, the artistic matter is light, space and movement. Vasarely had a very precise and elaborate way of working, sometimes using photographs that he projected onto the canvas by means of slides, which he called "photographisms". In some works (such as Eridan, 1956) he investigated the contrasts between light and shadow, reaching high values of light achieved with white and yellow. His Cappella series (1964) focused on the opposition between light and dark combined with shapes. The Vega series (1967) was made with aluminum paint and gold and silver glitter, which reverberated the light. Soto carried out a type of serial painting influenced by dodecaphonism, with primary colors that stand out for their transparency and provoke a strong sensation of movement. Agam, for his part, was particularly interested in chromatic combinations, working with 150 different colors, in painting or sculpture-painting. Among the figurative trends is pop art (1955-1970), which emerged in the United States as a movement rejecting abstract expressionism. It includes a series of authors who returned to figuration, with a marked component of popular inspiration, with images drawn from the world of advertising, photography, comics, and the mass media. Roy Lichtenstein, Tom Wesselmann, James Rosenquist, and Andy Warhol stood out. Lichtenstein was particularly inspired by comics, with paintings that look like vignettes, sometimes with the typical graininess of printed comics. He used flat inks, without mixtures, in pure colors. He also produced landscapes, with light colors and great luminosity. Wesselmann specialized in nudes, generally in bathrooms, with a cold and aseptic appearance. He also used pure colors, without tonal gradations, with sharp contrasts. Rosenquist had a more surrealist vein, with a preference for consumerist and advertising themes. Warhol was the most media-oriented and commercial artist of the group. He usually worked in silkscreen, in series ranging from portraits of famous people such as Elvis Presley, Marilyn Monroe or Mao Tse-tung to all kinds of objects, such as his series of Campbell's soup cans, made with a garish and strident colorism and a pure, impersonal technique. 
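Of the optical devices mentioned above in connection with op-art, the moiré effect is the easiest to reproduce: superimposing two regular gratings at a slight angle produces large-scale interference fringes that appear to shimmer as the eye moves. The following sketch is a minimal illustration of that mechanism rather than a description of any particular op-art work; the grating frequency, the rotation angle and the output file name are arbitrary choices.

```python
# Minimal moiré generator: overlay two line gratings rotated by a small angle
# and write the result as an ASCII PGM image. Frequency and angle are
# arbitrary illustrative values.
import math

WIDTH, HEIGHT = 400, 400
FREQ = 0.35              # spatial frequency of each grating (radians per pixel)
ANGLE = math.radians(3)  # small rotation between the two gratings

def grating(x, y, angle):
    """Return 1 where a rotated line grating is 'on', 0 where it is 'off'."""
    u = x * math.cos(angle) + y * math.sin(angle)
    return 1 if math.sin(u * FREQ) > 0 else 0

rows = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # Only pixels where both gratings are 'on' stay bright; the slight
        # misalignment between them creates the large interference fringes.
        value = 255 if (grating(x, y, 0.0) and grating(x, y, ANGLE)) else 0
        row.append(str(value))
    rows.append(" ".join(row))

with open("moire.pgm", "w") as f:
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    f.write("\n".join(rows) + "\n")
print("Wrote moire.pgm")
```

Opening the resulting file in any image viewer that reads PGM shows broad dark and light bands that none of the individual gratings contains, which is the perceptual trick the op-artists exploited at much larger scale.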
Abstraction resurfaced between the 1960s and 1980s with post-painterly abstraction and Minimalism. Post-painterly abstraction (also called "New Abstraction") focused on geometrism, with an austere, cold and impersonal language, owing to an anti-anthropocentric tendency that could be glimpsed in these years in art and culture in general, also present in pop art, a style with which it coexisted. Thus, post-painterly abstraction concentrates on form and color, without offering any iconographic reading, interested only in the visual impact, without further reflection. These artists use striking colors, sometimes of a metallic or fluorescent nature. Barnett Newman, Frank Stella, Ellsworth Kelly and Kenneth Noland stand out. Minimalism was a trend that involved a process of dematerialization which would lead to conceptual art. These are works of marked simplicity, reduced to a minimal motif, pared back to the artist's initial idea. Robert Mangold and Robert Ryman stand out; they shared a preference for monochrome, with a polished technique in which the brushstroke is not noticeable, and the use of light tones, preferably pastel colors. Figuration returned with hyperrealism – which emerged around 1965 – a trend characterized by its superlative and exaggerated vision of reality, captured with great accuracy in all its details, with an almost photographic aspect, in which Chuck Close, Richard Estes, Don Eddy, John Salt, and Ralph Goings stand out. These artists are concerned, among other things, with details such as glints and reflections in cars and shop windows, as well as with effects of light, especially the artificial lights of the city, in urban views with neon signs and the like. Linked to this movement is the Spaniard Antonio López García, author of works of academic facture in which the most meticulous description of reality is combined with a vaguely unreal air close to magical realism. His urban landscapes of wide atmospheres stand out (Madrid sur, 1965–1985; Madrid desde Torres Blancas, 1976–1982), as well as images of almost photographic aspect such as Mujer en la bañera (1968), in which a woman takes a bath in an atmosphere of electric light reflected on the bathroom tiles, creating an intense and vibrant composition. Another movement especially concerned with the effects of light has been neo-luminism, an American movement inspired by American luminism and the Hudson River School, from which it adopts their majestic skies and calm-water seascapes, as well as the atmospheric effects of light rendered in subtle gradations. Its main representatives are James Doolin, April Gornik, Norman Lundin, Scott Cameron, Steven DaLuz and Pauline Ziegen. Since 1975, postmodern art has predominated on the international art scene: it emerged in opposition to so-called modern art and is the art of postmodernity, a socio-cultural theory that postulates the current validity of a historical period that has supposedly surpassed the modern project, that is, the cultural, political and economic roots of the Contemporary Age, marked culturally by the Enlightenment, politically by the French Revolution and economically by the Industrial Revolution. These artists take the failure of the avant-garde movements as the failure of the modern project: the avant-garde intended to eliminate the distance between art and life, to universalize art; the postmodern artist, on the other hand, is self-referential – art speaks of art – and does not aim to perform a social function. 
Postmodern painting returns to the traditional techniques and themes of art, although with a certain stylistic hybridization, taking advantage of the resources of all the preceding artistic periods and intermingling and deconstructing them, in a procedure that has been termed "appropriationism" or artistic "nomadism". Individual artists such as Jeff Koons, David Salle, Jean-Michel Basquiat, Keith Haring, Julian Schnabel, Eric Fischl or Miquel Barceló stand out, as well as various movements such as the Italian trans-avant-garde (Sandro Chia, Francesco Clemente, Enzo Cucchi, Nicola De Maria, Mimmo Paladino), German Neo-Expressionism (Anselm Kiefer, Georg Baselitz, Jörg Immendorff, Markus Lüpertz, Sigmar Polke), Neo-Mannerism, free figuration, among others. See also Light art Light painting History of painting Periods in Western art history References Bibliography Painting Light Renaissance Modern art Light art Medieval art Luminism (American art style) Leonardo da Vinci Vincent van Gogh Jean-Michel Basquiat
Light in painting
Physics
51,825
77,961,692
https://en.wikipedia.org/wiki/Mucoromyceta
Mucoromyceta is a subkingdom of fungi which includes the divisions Calcarisporiellomycota, Glomeromycota, Mortierellomycota and Mucoromycota. This enormous group includes almost all molds. Description Molds in this group have a stalk (a sporangiophore) topped by a cap-like structure (a sporangium) that contains the spores. References Further reading Subkingdoms Fungus taxa Taxa described in 2018
Mucoromyceta
Biology
93
2,733,263
https://en.wikipedia.org/wiki/CGMS-A
Copy Generation Management System – Analog (CGMS-A) is a copy protection mechanism for analog television signals. It consists of a waveform inserted into the non-picture vertical blanking interval (VBI) of an analogue video signal. If a compatible recording device (for example, a DVD recorder) detects this waveform, it may block or restrict recording of the video content. It is not the same as the broadcast flag, which is designed for use in digital television signals, although the concept is the same. There is a digital form of CGMS specified as CGMS-D which is required by the DTCP ("5C") protection standard. History CGMS-A has been in existence since 1995, and has been standardized by various organizations including the IEC and EIA/CEA. It is used in devices such as PVRs/DVRs, DVD players and recorders, D-VHS, and Blu-ray recorders, as well as certain television broadcasts. More recent TiVo firmware releases comply with CGMS-A signals. Applications Implementation of CGMS-A is required for certain applications by DVD CCA license. D-VHS and some DVD recorders comply with the CGMS-A signal on analog inputs. The technology requires minimal signal processing. Where the source signal is analogue (e.g. VHS, analogue broadcast), the CGMS-A signalling may be present in that source. Where the source signal is digital (e.g. DVD, digital broadcast), then the Copy Control Information (CCI) is carried in metadata in the digital transport or program stream, and a compliant hardware device (e.g. a DVD player) will read that data, and encode it into the analogue video signal generated within the device itself. There is no blanket legal requirement for devices which record video to detect or act upon the CGMS-A information. For example, the DMCA "does not require manufacturers of consumer electronics, telecommunications or computing equipment to design their products affirmatively to respond to any particular technological measure." Standardization CGMS-A is standardized through the IEC, CEA, EIA-J and ETSI as follows: In all these standards, the CGMS-A information is only two out of many bits of information that are defined. On 60 Hz systems (commonly known as "NTSC"), the system is highly extensible, though beyond the CGMS-A bits, only the aspect ratio of the video signal and the analogue protection system (APS) bits are commonly used. The signalling is typically present on every video frame, but CEA-805-D states that "the transmission rate for any given packet type defined in CEA-805-D shall be at least once every three frames", meaning that in theory for two out of three frames, different header values can be used to send data not defined in the standard. Type A signalling (20 bits in total; the only type defined for 480i) offers some extensibility by re-using the 14 data bits via one of the 14 undefined values for the four header bits. Type B signalling (134 bits in total) already defines bits to carry an Active Format Description, Colorimetry, Redistribution Control, and a pixel-accurate definition of the location of any letterbox or pillarbox bars in the image, plus two bytes reserved for future use. Different header bit values may also be used for further extensibility. On 50 Hz systems (commonly, though incorrectly known as "PAL"), the bits that are widely used and interpreted as CGMS-A are not named as such, and are added at the end of an existing signalling standard originally created for the PALplus video format (but still in common use in Europe in standard PAL video) called Widescreen signaling. 
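As a rough illustration of how little data is actually involved, the sketch below unpacks the two CGMS-A bits, the two APS bits and the Analog Source Bit from the 7-bit field that the Signalling subsection below describes (bits 4–3 CGMS-A, bits 2–1 APS, bit 0 ASB, bits 6–5 reserved). The permission strings follow the standard CGMS-A matrix given there; the way the field is packed into a single integer here is an assumption made for illustration, not a statement about any particular decoder implementation.

```python
# Hypothetical helper: interpret the 7-bit CGMS-A/APS field carried in a
# CEA-608 XDS packet.  Bit layout (per the description in this article):
# bits 6-5 reserved, bits 4-3 CGMS-A, bits 2-1 APS, bit 0 ASB.
CGMS_A = {
    0b00: "Copying is permitted without restriction",
    0b01: "Copy no more (originally reserved)",
    0b10: "One generation of copies may be made",
    0b11: "No copying is permitted",
}

def decode_cgms_field(field: int) -> dict:
    """Split the 7-bit field into its components."""
    return {
        "cgms_a": CGMS_A[(field >> 3) & 0b11],
        "aps": (field >> 1) & 0b11,          # Analog Protection System trigger bits
        "analog_source": bool(field & 0b1),  # ASB: signal originates from analog source
    }

# Example: CGMS-A = 11 (no copying), APS = 00, ASB = 0
print(decode_cgms_field(0b0011000))
```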
Contradicting Type A standards Some references quote EIA-J CPR1204-1 as the authoritative reference for CGMS-A on 480p60 (525p) systems, since this was the first published standard to mention CGMS-A on 480p. This EIA-J document does not define the meaning of the bits, only their timing on the analogue video signal. The 480p signalling is based on the existing 480i standard but with a double speed clock, and IEC-61880-2 formalises this by defining bit meanings which are the same as for 480i. However, CEA-805 re-defines the aspect ratio signalling bits. Hence 480p Type A line 40 CGMS-A data generated in accordance with CEA-805 cannot signal the aspect ratio of the video image, and in this way is incompatible with the same data generated in accordance with IEC-61880-2, and is no longer a straight "double speed clock" version of the 480i standard. CEA-805 CGMS-A Type B confusion CEA-805 is now on its fourth major version (CEA-805-D), and there have been errata issued to at least one version. CEA-805-D recognises that, in respect of Type B signalling, earlier versions of the standard were unclear regarding the order of bits as represented in the analogue video signal vs as used for the CRC calculation, and also which bits were to be used for the CRC calculation. Issue D requires sink devices to perform multiple CRC calculations for Type B signalling, taking account of various possible implementations in source devices. There is no such confusion surrounding Type A signalling. Signalling CEA-608-B specifies the meaning of the 7-bit field placed on the data lines. The bits 4 and 3 contain the CGMS-A values, the bits 2 and 1 contain the Analog Protection System (APS) value, the bit 0 is the Analog Source Bit (ASB) specifying if the signal originates from a pre-recorded material, bits 5 and 6 are reserved. CGMS-A is signalled by 2 bits in the vertical blanking interval (VBI) signal of analog television broadcasts according to the following matrix: 0,0 – copying is permitted without restriction (CopyFreely); 0,1 – no further copying is permitted (CopyNoMore)*; 1,0 – one generation of copies may be made (CopyOnce); 1,1 – no copying is permitted (CopyNever). *CopyNoMore was not a part of the original standard. The 0,1 value originally was "Reserved". Removal The signal itself can be easily stripped by normalizing the VBI, e.g. using a video stabilizer to counter the side effects from Macrovision's manipulation of the VBI. CGMS-A VBI data is commingled with, or generally located near, captioning signals, so removal of CGMS-A will likely remove captioning as well. The scheme can be made more robust by adding the Rights Assertion Mark (RAM); when the RAM is present but CGMS-A is not, copying is denied, turning the scheme into a permission-based one. The RAM can be encoded by using the VEIL technology. References External links Copy Generation Management System – Analog (PowerPoint) Microsoft VIDEOPARAMETERS structure for Windows GDI video connections Techdirt: Microsoft: It's Not The Broadcast Flag, It's A Different Flag Digital rights management standards Digital television High-definition television Television technology
CGMS-A
Technology
1,484
65,888,401
https://en.wikipedia.org/wiki/Accessory%20gene%20regulator
Accessory gene regulator (agr) is a complex 5 gene locus that is a global regulator of virulence in Staphylococcus aureus. It encodes a two-component transcriptional quorum-sensing (QS) system activated by an autoinducing, thiolactone-containing cyclic peptide (AIP). Agr occurs in 4 allelic subtypes that have an important role in staphylococcal evolution. The corresponding AIPs are mutually cross-inhibitory, which may enhance the evolutionary separation of the 4 groups. The agr receptor, AgrC, is a model histidine phosphokinase (HPK) that has been used to decipher the molecular mechanism of signal transduction. AIP binding to the extracellular domain of AgrC causes twisting of the intracellular α-helical domain so as to enable trans-phosphorylation of the active site histidine; the inhibitory AIPs cause the α-helical domain to twist in the opposite direction, preventing trans-phosphorylation. The agr QS circuit autoactivates transcription of agrA which, in turn, upregulates the phenol-soluble modulins. More importantly, it activates transcription of a divergently oriented promoter whose transcript, known as RNAIII, is a 514 nt regulatory RNA that encodes δ-hemolysin and is the major effector of the agr regulon. RNAIII acts by antisense inhibition or activation of target gene translation. In vitro, early in growth, genes encoding surface proteins important for adhesion and immune evasion (such as spa – encoding protein A) are expressed, enabling the organism to gain a foothold. Later in growth, these genes are down-regulated by RNAIII and those encoding toxins, hemolysins and other virulence-related proteins are turned on, enabling the organism to establish and promulgate its pathological programs, such as abscess formation. It is assumed that this program operates in vivo as well. As agr is essential for staphylococcal contagion, agr-defective mutants are not contagious, but enable the organism's long-term survival in chronic conditions such as surgical implant infections, osteomyelitis or the infected lung in cystic fibrosis. In keeping with this behavior, mutations inactivating agr function enhance the stability of biofilms, which are key to the maintenance of chronic infections. Agr is widely conserved among Bacillota and has a well-defined role in virulence regulation in several genera, especially Listeria and Clostridia. References Gene expression Genetics Virology
Accessory gene regulator
Chemistry,Biology
574
28,214,810
https://en.wikipedia.org/wiki/Tango%20bundle
In algebraic geometry, a Tango bundle is one of the indecomposable vector bundles of rank n − 1 constructed on n-dimensional projective space Pn by Hiroshi Tango. References Algebraic geometry Vector bundles
Tango bundle
Mathematics
40
51,321,238
https://en.wikipedia.org/wiki/Pirsonia
Pirsonia is a non photosynthetic genus of heterokonts. It comprises the entirety of the family Pirsoniaceae, order Pirsoniida and class Pirsonea in the subphylum Bigyromonada, phylum Gyrista. Taxonomy Class Pirsonea Cavalier-Smith 2017 [Pirsoniomycetes] Order Pirsoniales Cavalier-Smith 1998 [Pirsoniida Cavalier-Smith & Chao 2006] Family Pirsoniaceae Cavalier-Smith 1998 Pirsonia Schnepf, Debres & Elbrachter 1990 P. diadema Kühn 1996 P. eucampiae Kühn 1996 P. formosa Kühn 1996 P. guinardie Schnepf, Debres & Elbrachter 1990 P. mucosa Kühn 1996 P. punctigerae P. verrucosa Kühn 1996 References External links Heterokont genera Heterokonts
Pirsonia
Biology
197
39,124,671
https://en.wikipedia.org/wiki/NK-92
The NK-92 cell line is an immortalised cell line that has the characteristics of a type of immune cell found in human blood called 'natural killer' (NK) cells. Blood NK cells and NK-92 cells recognize and attack cancer cells as well as cells that have been infected with a virus, bacteria, or fungus. NK-92 cells were first isolated in 1992 in the laboratory of Hans Klingemann at the British Columbia Cancer Agency in Vancouver, Canada, from a patient who had a rare NK cell non-Hodgkin lymphoma. These cells were subsequently developed into a continuously growing cell line. NK-92 cells are distinguished by their suitability for expansion to large numbers, ability to consistently kill cancer cells and testing in clinical trials. When NK-92 cells recognize a cancerous or infected cell, they secrete perforin, which opens pores in the diseased cells, and release granzymes that kill the target cells. NK-92 cells are also capable of producing cytokines such as tumor necrosis factor alpha (TNF-α) and interferon gamma (IFN-γ), which stimulate proliferation and activation of other immune cells. In clinical trials Several phase 1 clinical trials have been performed by experts in the field of adoptive immunotherapy of cancer. Hans Klingemann and Sally Arai completed a US trial at Rush University Medical Center (Chicago) in renal cell cancer and melanoma patients in 2008, and Torsten Tonn, MD and Oliver Ottmann, MD completed the European trial at the University of Frankfurt in patients with various solid and hematological malignancies in 2013. Armand Keating at Princess Margaret Hospital in Toronto conducted a trial in which NK-92 cells were given to patients who had relapsed after autologous bone marrow transplants for leukemia or lymphoma. In all clinical trials so far, NK-92 cells were administered as a simple intravenous infusion, dosed two or three times per treatment course, and given in the outpatient setting. Of the 39 patients enrolled across the three studies, 2 serious (grade 3–4) side-effects occurred during or after the infusion of NK-92 cells; these side effects disappeared afterward. The doses given to patients ranged from 1×10⁸ cells/m² to 1×10¹⁰ cells/m² per infusion. Patients received between two and three infusions over a period of less than a week. About one-third of the treated patients had clinically meaningful responses, with some of them fully recovering. Comparison to other NK cells In a 2017 study by Congcong Zhang and Winfried S. Wels, NK-92 cells were genetically engineered to recognize and kill specific human cancers by expressing chimeric antigen receptors (CARs). CAR-engineered T-lymphocytes (CAR-T) have garnered attention in immuno-oncology, as the infusion of CAR-T cells has been shown to induce remissions in some patients with acute and chronic leukemia and lymphoma. However, CAR-T cells can cause cytokine release syndrome (CRS). CAR-engineered NK cells from either peripheral or cord blood have not proved to be as feasible for use to treat diseases as they are difficult to expand to get sufficient numbers, and the yields can be variable and/or too low. Also, genetic transduction to introduce the CAR into blood NK cells requires lentiviral or retroviral vectors, which are only moderately efficient. NK-92 cells, in contrast, have predictable expansion kinetics and can be grown in bioreactors that produce billions of cells within a couple of weeks. 
Further, NK-92 cells can easily be transduced by physical methods, and mRNA can be shuttled into NK-92 cells with high efficiency. CAR-expressing NK-92 cells have been generated to target a number of cancer surface receptors such as programmed death-ligand 1 (PD-L1), CD19 (a type of B cell receptor), human epidermal growth factor receptor 2 (HER2/ErbB2) and epidermal growth factor receptor (EGFR, aka HER1); and many of these engineered NK-92 cells are currently in clinical trials for the treatment of cancer. NK-92 variants NK-92 cells, which require interleukin-2 (IL-2) for growth, have also been genetically altered with an IL-2 gene to allow them to grow in culture without the addition of IL-2. They have also been engineered to express a high-affinity Fc-receptor, the main receptor through which monoclonal antibodies bind to NK-92 cells and direct their cytotoxic load against cancer cells. The cells have been further engineered to express chimeric antigen receptors (CARs) targeting antigens such as programmed death-ligand 1 (PD-L1). During the course of development, NK-92 cells were renamed activated NK cells (aNK) and the different variants have been designated as follows: NK-92 = parental cells, later designated aNK NK-92ci = NK-92 cells transfected with an episomal vector for expression of IL-2 NK-92 mi = NK-92 cells transfected with an MFG vector for expression of IL-2 haNK = NK-92 (aNK) transfected with a plasmid expressing high affinity CD16 FcR and erIL-2 taNK = NK-92 (aNK) transfected with either a plasmid or lentiviral vector expressing a CAR t-haNK = NK-92 (aNK) transfected with a plasmid expressing a CAR and CD16 FcR erIL-2 qt-haNK = NK-92 (aNK) transfected with a plasmid expressing a 4th gene in addition to a CAR, the CD16 FcR, and erIL-2: examples: homing receptor of the CXCR family or immune-active cytokines The high affinity Fc-receptor-expressing NK (haNK) cells were administered to patients with advanced Merkel cell carcinoma (MCC) and there were some notable responses. Currently, a HER2-targeted aNK (taNK) line and various t-haNK (CAR and Fc-receptor expressing) cell lines are in clinical trials in patients with various cancers, as described in the review "The NK-92 cell line 30 years later: its impact on natural killer cell research and treatment of cancer." Ownership and Licenses Global rights to the NK-92 cell line were assigned to ImmunityBio Inc. (formerly NantKwest, Inc.). ImmunityBio's only authorized NK-92 distributor is Brink Biologics, Inc. (San Diego), which makes NK-92 cells and certain genetically modified CD16+ variants available to third parties for non-clinical research under a limited use license agreement. References External links Cellosaurus entry for NK-92 Human cell lines Cancer treatments Biotechnology Genetic engineering
NK-92
Chemistry,Engineering,Biology
1,454
7,800,566
https://en.wikipedia.org/wiki/Surface%20photovoltage
Surface photovoltage (SPV) measurements are a widely used method to determine the minority carrier diffusion length of semiconductors. Since the transport of minority carriers determines the behavior of the p-n junctions that are ubiquitous in semiconductor devices, surface photovoltage data can be very helpful in understanding their performance. As a contactless method, SPV is a popular technique for characterizing poorly understood compound semiconductors where the fabrication of ohmic contacts or special device structures may be difficult. Theory As the name suggests, SPV measurements involve monitoring the potential of a semiconductor surface while generating electron-hole pairs with a light source. The surfaces of semiconductors are often depletion regions (or space charge regions) where a built-in electric field due to defects has swept out mobile charge carriers. A reduced carrier density means that the electronic energy band of the majority carriers is bent away from the Fermi level. This band-bending gives rise to a surface potential. When a light source creates electron-hole pairs deep within the semiconductor, they must diffuse through the bulk before reaching the surface depletion region. The photogenerated minority carriers have a shorter diffusion length than the much more numerous majority carriers, with which they can radiatively recombine. The change in surface potential upon illumination is therefore a measure of the ability of minority carriers to reach the surface, namely the minority carrier diffusion length. As always in diffusive processes, the diffusion length L is approximately related to the carrier lifetime τ by the expression L = √(Dτ), where D is the diffusion coefficient. The diffusion length is independent of any built-in fields in contrast to the drift behavior of the carriers. Note that the photogenerated majority carriers will also diffuse towards the surface but their number as a fraction of the thermally generated majority carrier density in a moderately doped semiconductor will be too small to create a measurable photovoltage. Both carrier types will also diffuse towards the rear contact where their collection can confuse interpretation of the data when the diffusion lengths are larger than the film thickness. In a real semiconductor, the measured diffusion length includes the effect of surface recombination, which is best understood through its effect on carrier lifetime: 1/τ_eff = 1/τ_b + 2S/d, where τ_eff is the effective carrier lifetime, τ_b is the bulk carrier lifetime, S is the surface recombination velocity and d is the film or wafer thickness. Even for well characterized materials, uncertainty about the value of the surface recombination velocity reduces the accuracy with which the diffusion length can be determined for thinner films. Experimental methods Surface photovoltage measurements are performed by placing a wafer or sheet film of a semiconducting material on a ground electrode and positioning a Kelvin probe a small distance above the sample. The surface is illuminated with light of fixed wavelength in industrial applications or with light whose wavelength is scanned using a monochromator so as to vary the absorption depth of the photons. The deeper in the semiconductor that carrier generation occurs, the fewer the number of minority carriers that will reach the surface and the smaller the photovoltage. On a semiconductor whose spectral absorption coefficient is known, the minority carrier diffusion length can in principle be extracted from a measurement of photovoltage versus wavelength. 
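A minimal sketch of how such an extraction can be carried out numerically is given below, following the constant-photovoltage (Goodman-type) analysis that the ASTM standard cited in the external links describes. It assumes the experimenter has recorded, at each wavelength, the photon flux needed to hold the photovoltage at one fixed magnitude, and that the absorption coefficient α at each wavelength is known; under the usual assumptions the required flux is then linear in the penetration depth 1/α, and extrapolating the line to zero flux gives an intercept at −L. The function name, variable names and sample numbers are illustrative only.

```python
import numpy as np

def diffusion_length_goodman(alpha, flux):
    """Estimate the minority carrier diffusion length L from constant-SPV data.

    alpha : absorption coefficients (1/cm) at each measurement wavelength
    flux  : photon flux (arbitrary units) required to keep the surface
            photovoltage at the same fixed magnitude at each wavelength

    The flux is fitted as a straight line in the penetration depth 1/alpha;
    the x-intercept of that line lies at -L, so L = intercept / slope.
    """
    penetration = 1.0 / np.asarray(alpha, dtype=float)
    slope, intercept = np.polyfit(penetration, np.asarray(flux, dtype=float), 1)
    return intercept / slope   # equals -(x-intercept), i.e. L in cm

# Illustrative, noise-free data for a sample whose true L is 0.02 cm
alpha = np.array([50.0, 100.0, 200.0, 400.0])   # 1/cm
flux = 3.0e15 * (1.0 / alpha + 0.02)            # synthetic constant-SPV fluxes
print(diffusion_length_goodman(alpha, flux))    # ~0.02 cm
```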
The optical properties of a novel semiconductor may not be well known or may not be homogeneous across the sample. The temperature of the semiconductor must be carefully controlled during an SPV measurement lest thermal drift complicate the comparison of different samples. Typically SPV measurements are done in an AC-coupled fashion using a chopped light source rather than a vibrating Kelvin probe. Significance The minority carrier diffusion length is critical in determining the performance of devices such as photoconducting detectors and bipolar transistors. In both cases the ratio of the diffusion length to the device dimensions determines the gain. In photovoltaic devices, photodiodes and field-effect transistors, the drift behavior due to built-in fields is more important under typical conditions than the diffusive behavior. Even so, SPV is a convenient method of measuring the density of impurity-derived recombination centers that limit device performance. SPV is performed both as an automated and routine test of material quality in a production environment and as an experimental tool to probe the behavior of less well studied semiconducting materials. Time-resolved photoluminescence is an alternate contactless method of determining minority carrier transport properties. See also Kelvin probe force microscope Photo-reflectance Scanning Kelvin probe References External links Freiberg Instruments vendor of industrial and scientific SPV and Minority Carrier Lifetime measurement systems Semilab vendor of commercial SPV and Minority Carrier Lifetime measurement systems KP Technology vendors of and consultants about Kelvin probes ASTM standard F391-96 "Standard Test Methods for Minority Carrier Diffusion Length in Extrinsic Semiconductors by Measurement of Steady-State Surface Photovoltage" Semiconductor analysis Condensed matter physics
Surface photovoltage
Physics,Chemistry,Materials_science,Engineering
983
3,666,628
https://en.wikipedia.org/wiki/Zweikanalton
Zweikanalton ("two-channel sound") or A2 Stereo, is an analog television sound transmission system used in Germany, Austria, Australia, Switzerland, Netherlands and some other countries that use or used CCIR systems. South Korea utilized a modified version of Zweikanalton for the NTSC analog television standard. It relies on two separate FM carriers. This offers a relatively high separation between the channels (compared to a subcarrier-based multiplexing system) and can thus be used for bilingual broadcasts as well as stereo. Unlike the competing NICAM standard, Zweikanalton is an analog system. Zweikanalton can be adapted to any existing analogue television system, and modern PAL or SECAM television receivers generally include a sound detector IC that can decode both Zweikanalton and NICAM. Technical details A 2nd FM sound carrier containing a second sound channel is transmitted at a frequency 242 kHz higher than the default FM sound carrier, and contains a 54.6875 kHz pilot tone to indicate whether the broadcast is mono, stereo or bilingual. This pilot tone is 50% amplitude-modulated with 117.5 Hz for stereo or 274.1 Hz for bilingual. The absence of this carrier indicates normal mono sound. Zweikanalton can carry either a completely separate audio program, or can be used for stereo sound transmission. In the latter case, the first FM carrier carries (L+R) for compatibility, while the second carrier carries R (not L-R.) After combining the two channels, this method improves the signal-to-noise ratio by reducing the correlated noise between the channels. Carrier frequencies are chosen so that they cause minimal interference to the picture. The difference between the two sound carriers is 15.5 times the line frequency (15.5 x 15625 Hz = 242187.5 Hz) which, being an odd multiple of half line frequency, reduces the visibility of intermodulation products between the two carriers. The pilot tone frequency is 3.5 times line frequency (54687.5 Hz). The modulated tone frequency is 117.50 Hz for stereo transmission and 274.1 Hz for bilingual transmission. Absence of this tone is interpreted as a monaural transmission. a.The second sound carrier frequency of DK systems varies from country, and sometimes manufacturers divide them into DK1/DK2/DK3 systems. b.The video bandwidth is reduced. System M variant There is a modified version of Zweikanalton used in South Korea, compatible with the NTSC System M standard of TV transmission. In this case the second FM carrier is 14.25 times the line frequency, or about 224 kHz, above the first carrier; pre-emphasis is 75 microseconds; the stereo pilot tone frequency is 149.9 Hz; the bilingual pilot tone frequency is 276 Hz; and the second channel carries L-R (not R). History Zweikanalton was developed by the (IRT) in Munich during the 1970s, and was first introduced on the German national television channel ZDF on 13 September 1981. The German public broadcaster ARD subsequently introduced Zweikanalton on its Das Erste channel on 29 August 1985 in honour of the 1985 edition of the Internationale Funkausstellung Berlin (IFA). West Germany thus became the first country in Europe to use multiplexed sound on its television channels. In Malaysia, TV3 used Zweikanalton on its UHF analogue transmission frequency (Channel 29), while NICAM was instead used on its VHF analogue transmission frequency (Channel 12). In Indonesia, the first TV station to use Zweikanalton was SCTV, which utilized from its start of broadcasting in 1990. 
Later, Zweikanalton was abandoned by national networks in favor of NICAM, though at least one local television station still used Zweikanalton. As a result of the analogue television switch-off in most countries which used Zweikanalton, the system is now considered obsolete and has been replaced with MPEG-2 and/or MPEG-4 audio in countries that have converted to DVB-T/DVB-T2 (Europe and Asia-Pacific), and with Dolby Digital AC-3 on ATSC in South Korea. Other names Zweikanalton is known by a variety of names worldwide. The most commonly used names are Zweiton, German Stereo, A2 Stereo, West German Stereo and IGR Stereo. See also Multichannel Television Sound (3 additional audio channels on 4.5 MHz audio carriers) NICAM EIAJ MTS Notes and references Broadcast engineering Television technology
Zweikanalton
Technology,Engineering
959
2,341,198
https://en.wikipedia.org/wiki/Merge%20%28version%20control%29
In version control, merging (also called integration) is a fundamental operation that reconciles multiple changes made to a version-controlled collection of files. Most often, it is necessary when a file is modified on two independent branches and subsequently merged. The result is a single collection of files that contains both sets of changes. In some cases, the merge can be performed automatically, because there is sufficient history information to reconstruct the changes, and the changes do not conflict. In other cases, a person must decide exactly what the resulting files should contain. Many revision control software tools include merge capabilities. Types of merges There are two types of merges: unstructured and structured. Unstructured merge Unstructured merge operates on raw text, typically using lines of text as atomic units. This is what Unix tools (diff/patch) and version control tools such as CVS, SVN and Git use. This is limited, as a line of text does not represent the structure of source code. Structured merge Structured merge tools, or AST merge, turn the source code into a fully resolved AST. This allows for a fine-grained merge that avoids spurious conflicts. Workflow Automatic merging is what version control software does when it reconciles changes that have happened simultaneously (in a logical sense). Also, other pieces of software deploy automatic merging if they allow for editing the same content simultaneously. For instance, Wikipedia allows two people to edit the same article at the same time; when the latter contributor saves, their changes are merged into the article instead of overwriting the previous set of changes. Manual merging is what people have to resort to (possibly assisted by merging tools) when they have to reconcile files that differ. For instance, if two systems have slightly differing versions of a configuration file and a user wants to have the good stuff in both, this can usually be achieved by merging the configuration files by hand, and picking the wanted changes from both sources (this is also called two-way merging). Manual merging is also required when automatic merging runs into a change conflict; for instance, very few automatic merge tools can merge two changes to the same line of code (say, one that changes a function name, and another that adds a comment). In these cases, revision control systems defer to the user to specify the intended merge result. Merge algorithms There are many different approaches to automatic merging, with subtle differences. The more notable merge algorithms include three-way merge, recursive three-way merge, fuzzy patch application, weave merge, and patch commutation. Three-way merge A three-way merge is performed after an automated difference analysis between a file "A" and a file "B", while also considering their common ancestor, file "C". It is a rough merging method, but widely applicable since it only requires one common ancestor to reconstruct the changes that are to be merged. Three-way merge can be done on raw text (sequence of lines) or on structured trees. The three-way merge looks for sections which are the same in only two of the three files. In this case, there are two versions of the section, and the version which is in the common ancestor "C" is discarded, while the version that differs is preserved in the output. If "A" and "B" agree, that is what appears in the output. 
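The two-of-three rule just described can be illustrated with a short sketch. The following is a minimal, illustrative Python implementation that assumes the three versions have already been split into aligned chunks (real tools such as diff3 first compute that alignment with a difference algorithm); the function name and the conflict-marker format are hypothetical choices made for the example.

```python
def three_way_merge(ancestor, ours, theirs):
    """Merge aligned chunks from a common ancestor and two derivatives.

    Each argument is a list of text chunks at corresponding positions.
    Returns (merged_chunks, conflict_indices), where conflict_indices
    lists positions that a human must resolve.
    """
    merged, conflicts = [], []
    for i, (c, a, b) in enumerate(zip(ancestor, ours, theirs)):
        if a == b:            # both sides agree (possibly both made the same change)
            merged.append(a)
        elif a == c:          # only "theirs" changed: keep their version
            merged.append(b)
        elif b == c:          # only "ours" changed: keep our version
            merged.append(a)
        else:                 # both changed the chunk differently: conflict
            merged.append("<<<<<<< ours\n" + a + "=======\n" + b + ">>>>>>> theirs\n")
            conflicts.append(i)
    return merged, conflicts


if __name__ == "__main__":
    base   = ["line 1\n", "line 2\n", "line 3\n"]
    ours   = ["line 1\n", "line 2 changed by A\n", "line 3\n"]
    theirs = ["line 1\n", "line 2\n", "line 3 changed by B\n"]
    result, conflicts = three_way_merge(base, ours, theirs)
    print("".join(result))    # both independent edits are kept; no conflicts
```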
A section that is the same in "A" and "C" outputs the changed version in "B", and likewise a section that is the same in "B" and "C" outputs the version in "A". Sections that are different in all three files are marked as a conflict situation and left for the user to resolve. Three-way merging is implemented by the ubiquitous diff3 program, and was the central innovation that allowed the switch from file-locking based revision control systems to merge-based revision control systems. It is extensively used by the Concurrent Versions System (CVS). Recursive three-way merge Three-way merge based revision control tools are widespread, but the technique fundamentally depends on finding a common ancestor of the versions to be merged. There are awkward cases, particularly the "criss-cross merge", where a unique last common ancestor of the modified versions does not exist. Fortunately, in this case it can be shown that there are at most two possible candidate ancestors, and recursive three-way merge constructs a virtual ancestor by merging the non-unique ancestors first. This merge can itself suffer the same problem, so the algorithm recursively merges them. Since there is a finite number of versions in the history, the process is guaranteed to eventually terminate. This technique is used by the Git revision control tool. (Git's recursive merge implementation also handles other awkward cases, like a file being modified in one version and renamed in the other, but those are extensions to its three-way merge implementation; not part of the technique for finding three versions to merge.) Recursive three-way merge can only be used in situations where the tool has knowledge about the total ancestry directed acyclic graph (DAG) of the derivatives to be merged. Consequently, it cannot be used in situations where derivatives or merges do not fully specify their parent(s). Fuzzy patch application A patch is a file that contains a description of changes to a file. In the Unix world, there has been a tradition to disseminate changes to text files as patches in the format that is produced by "diff -u". This format can then be used by the patch program to re-apply (or remove) the changes into (or from) a text file, or a directory structure containing text files. However, the patch program also has some facilities to apply the patch into a file that is not exactly similar as the origin file that was used to produce the patch. This process is called fuzzy patch application, and results in a kind of asymmetric three-way merge, where the changes in the patch are discarded if the patch program cannot find a place in which to apply them. Like CVS started as a set of scripts on diff3, GNU arch started as a set of scripts on patch. However, fuzzy patch application is a relatively untrustworthy method, sometimes misapplying patches that have too little context (especially ones that create a new file), sometimes refusing to apply deletions that both derivatives have done. Patch commutation Patch commutation is used in Darcs to merge changes, and is also implemented in git (but called "rebasing"). Patch commutation merge means changing the order of patches (i.e. descriptions of changes) so that they form a linear history. In effect, when two patches are made in the context of a common situation, upon merging, one of them is rewritten so that it appears to be done in the context of the other. Patch commutation requires that the exact changes that made derivative files are stored or can be reconstructed. 
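As a rough illustration of the rewriting this involves, the sketch below handles the simplest possible case: patches that each insert a single line, where commuting one patch past another only requires adjusting a line offset. The representation is a toy one and far simpler than what Darcs or git actually store; the worked prose example that follows uses the same numbers.

```python
from dataclasses import dataclass

@dataclass
class InsertPatch:
    """A toy patch: insert `text` after line number `after` (1-based)."""
    after: int
    text: str

def commute_after(later: InsertPatch, earlier: InsertPatch) -> InsertPatch:
    """Rewrite `later` so it applies on top of `earlier` rather than on the
    common base.  In this toy model only the line offset needs adjusting."""
    if earlier.after <= later.after:
        # `earlier` inserted a line at or above our insertion point,
        # so our target line number shifts down by one.
        return InsertPatch(later.after + 1, later.text)
    return later  # insertion point is above the earlier change: unaffected

# Patch A inserts "X" after line 7; patch B inserts "Y" after line 310.
a = InsertPatch(7, "X")
b = InsertPatch(310, "Y")
b_rebased = commute_after(b, a)
print(b_rebased.after)  # 311 -- B rewritten to apply after A, as in the example below
```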
From these exact changes it is possible to compute how one of them should be changed in order to rebase it on the other. For instance, if patch A adds line "X" after line 7 of file F and patch B adds line "Y" after line 310 of file F, B has to be rewritten if it is rebased on A: the line must be added on line 311 of file F, because the line added in A offsets the line numbers by one. Patch commutation has been studied a great deal formally, but the algorithms for dealing with merge conflicts in patch commutation still remain open research questions. However, patch commutation can be proven to produce "correct" merge results where other merge strategies are mostly heuristics that try to produce what users want to see. The Unix program flipdiff from the "patchutils" package implements patch commutation for traditional patches produced by diff -u. Weave merge Weave merge is an algorithm that does not make use of a common ancestor for two files. Instead, it tracks how single lines are added and deleted in derivative versions of files, and produces the merged file on this information. For each line in the derivative files, weave merge collects the following information: which lines precede it, which follow it, and whether it was deleted at some stage of either derivative's history. If either derivative has had the line deleted at some point, it must not be present in the merged version. For other lines, they must be present in the merged version. The lines are sorted into an order where each line is after all lines that have preceded it at some point in history, and before all lines that have followed it at some point in history. If these constraints do not give a total ordering for all lines, then the lines that do not have an ordering with respect to each other are additions that conflict. Weave merge was apparently used by the commercial revision control tool BitKeeper and can handle some of the problem cases where a three-way merge produces wrong or bad results. It is also one of the merge options of the GNU Bazaar revision control tool, and is used in Codeville. See also Comparison of file comparison tools diff Branching (revision control) References Configuration management Version control
Merge (version control)
Engineering
1,907
25,438,418
https://en.wikipedia.org/wiki/Dushnik%E2%80%93Miller%20theorem
In mathematics, the Dushnik–Miller theorem is a result in order theory stating that every countably infinite linear order has a non-identity order embedding into itself. It is named for Ben Dushnik and E. W. Miller, who proved this result in a paper of 1940; in the same paper, they showed that the statement does not always hold for uncountable linear orders, using the axiom of choice to build a suborder of the real line of cardinality continuum with no non-identity order embeddings into itself. In reverse mathematics, the Dushnik–Miller theorem for countable linear orders has the same strength as the arithmetical comprehension axiom (ACA0), one of the "big five" subsystems of second-order arithmetic. This result is closely related to the fact that (as Louise Hay and Joseph Rosenstein proved) there exist computable linear orders with no computable non-identity self-embedding. See also Cantor's isomorphism theorem Laver's theorem References Order theory
Dushnik–Miller theorem
Mathematics
222
47,622,833
https://en.wikipedia.org/wiki/Gomphidius%20maculatus
Gomphidius maculatus is an edible mushroom in the family Gomphidiaceae that is found in Europe and North America. It was first described scientifically by naturalist Giovanni Antonio Scopoli in 1772. Elias Magnus Fries transferred it to the genus Gomphidius in 1838, giving it the name by which it is known today. The specific epithet maculatus is derived from the Latin word for "spotted". References External links Boletales Edible fungi Fungi described in 1772 Fungi of Europe Fungi of North America Fungus species
Gomphidius maculatus
Biology
109
7,454,381
https://en.wikipedia.org/wiki/Alikhan%20Bukeikhanov
Alikhan Nurmukhameduly Bukeikhan (5 March 1866 – 27 September 1937) was a Kazakh politician and publisher who served as the Chairman (Prime Minister) of the Kazakh Provisional National Government of Alash Orda and one of the leaders of the Alash party from late 1917 to 1920. Early life Alikhan Bukeikhanov was born into a Kazakh Muslim family on 5 March 1866, in Tokyrauyn Volost, Russian Empire. He was the son of Nurmuhammed Bukeikhanov and as a great-grandson of Barak Sultan, former khan of the Orta zhuz, he was a direct descendant of Genghis Khan. Bukeikhanov graduated from the Russian-Kazakh School and Omsk Technical School in 1890. He later studied at the Saint Petersburg Forestry Institute, where he graduated from the Faculty of Economics in 1894. During Bukeikhanov's youth, it is believed that he was influenced by socialists. Upon graduating, Bukeikhanov returned to Omsk and spent the next fourteen years there working. From 1895 to 1897, he worked as a math teacher in the Omsk school for Kazakh children. Bukeikhanov was a participant in the 1896 Shcherbina Expedition, which aimed to research and assess virtually every aspect of Russian Central Asia's environment and resources to the culture and traditions of its inhabitants. This was the first of a few similar missions which he accepted. Among his recorded contributions were "Ovtsevodstvo v stepnom krae" ("Sheep-Breeding in the Steppe Land"), which analyzed animal husbandry in Central Asia. Bukeikhanov was the first biographer of Abay Kunanbayev, publishing an obituary in Semipalatinsky listok in 1905. In 1909, he published a collection of Kunanbayev's works. Political life In 1905, Bukeikhanov's political activism began when he joined the Constitutional Democratic Party. In late 1905 at the Uralsk Oblast Party Congress, he tried to create the Kazakh Democratic party but failed. As a result of this action, he was arrested and prohibited from living in the Steppe Oblasts. During his exile, he relocated to Samara. He was elected to the State Duma of the Russian Empire as a member of that party in 1906 and signed the Vyborg petition to protest the dissolution of the Duma by the tsar. In 1908, he was arrested again and exiled in Samara until 1917. While in Samara, he participated in the Samara Guberniya Committee of the People's Freedom Party set up in 1915. Author of the idea of the First Kazakh Republic In April 1917, Bukeikhanov, Akhmet Baitursynov and several other native political figures took the initiative to convene an All-Kazakh Congress in Orenburg. In its resolution, Congress urged the return to the native population of all the lands confiscated from it by the previous regime and the expulsion of all the new settlers from the Kazakh-Kirghiz territories. Other resolutions demanded the transfer of the local schools into native hands and the termination of the recruitment introduced in 1916. Within the group, Bukeikhanov, along with Russian liberals, chiefly the Kadets sought to direct attention first to economic problems, whereas others sought to unite the Kazakhs with the other Turkic peoples of Russia. Three months later, another Kazakh-Kirghiz Congress met in Orenburg. There, for the first time, the idea of territorial autonomy emerged, and a national Kazakh-Kirghiz political party was formed, the Alash Autonomy. Before the February Revolution, Bukeikhanov collaborated with the Kadets in the hope of getting autonomous status for Kazakhs and contacted the head of the Russian Provisional Government Alexander Kerensky. 
Kerensky proceeded to make Bukeikhanov a commissar. On 19 March 1917, he was appointed as the Provisional Government Commissioner of Turgay Oblast. After the October Revolution, he was elected in 1917 as president of the Alash Orda government of Alash Autonomy. In 1920, after the establishment of Soviet hegemony, Bukeikhanov joined the Bolshevik party and returned to scientific life. His earlier political activities caused the authorities to view him with suspicion, leading to arrests in 1926 and 1928. In 1926, Bukeikhanov was arrested on the charge of counter-revolutionary activity and put into the Butyrka prison in Moscow. But due to the lack of evidence in the criminal case against him, he was released from prison. In 1930, the authorities banished him to Moscow, where he was arrested a final time in 1937 and executed. It was not until 1989 that the Soviet authorities rehabilitated him. Writings Bukeikhanov's major political publication was "Kirgizy" ("The Kazakhs") (1910), which was released in the Constitutional Democratic party book on nationalities edited by A. I. Kosteliansky. Bukeikhanov's other activities of this period included assisting in the creation of Qazaq, a Kazakh language newspaper, and writing articles for newspapers, including "Dala Walayatynyng Gazeti" (Omsk), "Orenburgskii Listok", "Semipalatinskii Listok", "Turkestanskie Vedomosti" (Tashkent), "Stepnoi Pioner" (Omsk), and "Sary-Arqa" (Semipalatinsk). He was also a contributor to Ay Qap and "Sibirskie Voprosy". Explanatory notes References Sources External links The Geography of Civilizations: A Spatial Analysis of the Kazakh Intelligentsia's activities, From the Mid-Nineteenth to the Early Twentieth Century |- |- 1866 births 1937 deaths Kazakh writers from the Russian Empire 19th-century writers from the Russian Empire People from Karaganda Region People from Semipalatinsk Oblast Russian Constitutional Democratic Party members Members of the 1st State Duma of the Russian Empire Environmental scientists Kazakh-language writers Kazakhstani scientists Members of the Grand Orient of Russia's Peoples Saint-Petersburg State Forestry University alumni Executed politicians Great Purge victims from Kazakhstan Alash Autonomy Muslims from the Russian Empire Inmates of Butyrka prison
Alikhan Bukeikhanov
Environmental_science
1,280
54,888,507
https://en.wikipedia.org/wiki/Jib%20%28crane%29
A jib or jib arm is the horizontal or near-horizontal beam used in many types of crane to support the load clear of the main support. An archaic spelling is gib. Usually jib arms are attached to a vertical mast or tower or sometimes to an inclined boom. In other jib-less designs such as derricks, the load is hung directly from a boom which is often anomalously called a jib. A camera jib or jib arm in cinematography is a small crane that holds nothing but the camera. References Mechanical engineering
Jib (crane)
Physics,Engineering
115
472,823
https://en.wikipedia.org/wiki/Syskey
The SAM Lock Tool, better known as Syskey (the name of its executable file), is a discontinued component of Windows NT that encrypts the Security Account Manager (SAM) database using a 128-bit RC4 encryption key. Introduced in the Q143475 hotfix for Windows NT 4.0 SP3, the tool was removed in Windows 10's Fall Creators Update in 2017 because its method of cryptography is considered insecure by modern standards and the fact that the tool has been widely employed in scams as a form of ransomware. Microsoft officially recommended use of BitLocker disk encryption as an alternative. History Introduced in the Q143475 hotfix included in Windows NT 4.0 SP3, Syskey was intended to protect against offline password cracking attacks by preventing the possessor of an unauthorized copy of the SAM file from extracting useful information from it. Syskey can optionally be configured to require the user to enter the key during boot (as a startup password) or to load the key onto removable storage media (e.g., a floppy disk or USB flash drive). In mid-2017, Microsoft removed syskey.exe from future versions of Windows. Microsoft recommends using "BitLocker or similar technologies instead of the syskey.exe utility." Security issues The "Syskey Bug" In December 1999, a security team from BindView found a security hole in Syskey that indicated that a certain form of offline cryptanalytic attack is possible, making a brute force attack appear to be possible. Microsoft later issued a fix for the problem (dubbed the "Syskey Bug"). The bug affected both Windows NT 4.0 and pre-RC3 versions of Windows 2000. Use as ransomware Syskey is commonly abused by technical support scammers to lock victims out of their own computers in order to coerce them into paying a ransom. See also LM hash pwdump References Cryptographic software Microsoft Windows security technology Windows administration
Syskey
Mathematics
423
46,775,635
https://en.wikipedia.org/wiki/Oxitec
Oxitec is a British biotechnology company that develops genetically modified insects in order to improve public health and food security through insect control. The insects act as biological insecticides. Insects are controlled without the use of chemical insecticides. Instead, the insects are genetically engineered to be unable to produce offspring. The company claims that this technology is more effective than insecticides and more environmentally friendly. History Oxitec was founded in 2002 as Oxford Insect Technologies in the United Kingdom by Luke Alphey and David Kelly, working with Oxford University's Isis Innovation technology transfer company. In August 2015, Oxitec was purchased by U.S.-based Intrexon for $160 million, and by US-based Third Security in early 2020. The company's first engineered insect was the pink bollworm (Pectinophora gossypiella). It was experimentally released in Arizona in 2006. It then modified Aedes aegypti, followed by a series of field trials in multiple countries. Grey Frandsen was appointed CEO in 2017. He is an American who led start-up initiatives in the U.S. government and the private and non-profit sectors on matters relating to national and global public health security, biotechnology and crisis response. Frandsen led the company's transition to its second-generation technology in 2018. During the 2010s, Oxitec established partnerships with agricultural industry leaders and the Bill & Melinda Gates Foundation. Frandsen was named one of Malaria No More's 10-to-End innovators in 2019. Genetically modified insects Oxitec built on considerable existing research showing that genetically modifying insects could disrupt their ability to reproduce and, over time, reduce their populations. Oxitec has developed genetically modified versions of A. aegypti and P. gossypiella. Its OX513A strain alters males to produce the protein tTA, which negatively affects cell development. OX5034 male offspring survive, allowing mating cycles that further reduce the population. In each generation fewer males pass on their self-limiting genes. OX5034 males were expected to disappear from the environment 10 generations after releases stop. Modified males mate with wild females. The self-limiting gene prevents female offspring from surviving. The engineered gene, based on elements found in E. coli and the herpes simplex virus, causes the female offspring's cells to produce tTAV protein. Projects Grand Cayman The first field trials were performed on Grand Cayman, starting in 2009. Approximately 3.3 million transgenic male A. aegypti were released. The experiments demonstrated that the animals were able to survive in this environment and produce offspring. Some eleven weeks after the release, the observed A. aegypti population declined about 80%. The tests were deemed a scientific success, but criticism emerged over communication policy. In May 2016 Grand Cayman announced a program to use Oxitec mosquitoes. The first phase informed the community about the programme. The next phase treated an area with about 1,800 residents in West Bay. 88% fewer A. aegypti eggs were observed compared to an equivalent untreated area. In November 2018, the Cayman Islands government elected to cease any new field trial agreements with Oxitec, citing cost-benefit concerns with the technology as its primary reason. Health Minister Dwayne Seymour and other legislators expressed skepticism on the record about the trials' effectiveness. 
However, Oxitec and the Mosquito Research and Control Unit of the Cayman Islands continue to analyze the data collected over the 10 year project. Brazil In 2011 Oxitec conducted a field test in cooperation with the company Moscamed and the University of São Paulo. The observed population declined by 80–95%. In July 2015, Oxitec released results of a test in the Juazeiro region of Brazil to fight Dengue, Chikungunya and Zika viruses. They concluded that mosquito populations were reduced by about 95%. It was used to try to combat Zika in Piracicaba, São Paulo in 2016. A 2013 OX513A project took place in Jacobina, in the state of Bahia. Some 450,000 males were released every week for 27 months. Wild populations were studied before the program began and at intervals of 6, 12 and 27 to 30 months. Another OX513A field test began on 23 May 2018 in Indaiatuba, a municipality in the state of São Paulo. The company announced the trial's results in June 2019, reporting that the project reduced the mosquito population by 79%. A 2019 outside study reported that genes characteristic of the altered males had entered the wild population. Oxitec put out a statement, citing concern with the paper's "misleading and speculative statements". The company's statement included rebuttals directed at some of its claims. These concerns were reflected in an Editorial Expression of Concern that Scientific Reports, the journal that published the study, attached to the paper in March 2020. It was reported that some of the authors claimed that they had not approved the version that was submitted for publication. Several critics responded to the paper, including entomologist Jason Rasgon of Pennsylvania State University, who stated that the finding was important, but that some claims were overstated and irresponsible. A 2018-2019 Indaiatuba study of four densely populated neighborhoods with high levels of Aedes aegypti reported that mosquito populations declined an average 88 percent over 11 months in those neighborhoods. In two, scientists released 100 male mosquito eggs per resident per week and 500 in the others, reporting that the smaller numbers were as effective as the higher ones. Boxes filled with eggs are available for home and business use. Malaysia Field trials were carried out in Malaysia in 2015. Panama Field trials were conducted in Panama in 2016. United States Arizona The company released an engineered pink bollworm (Pectinophora gossypiella) in Arizona in 2006. Florida A 2016 field trial planned in Florida was cancelled. Oxitec was invited to the Florida Keys in the early 2010s. The company conducted extensive community engagement. A November 2016 referendum showed overwhelming local support for the project to release genetically engineered male mosquitos. 31 out of 33 Monroe County precincts voted in favor. The company established waitlists due to resident interest in hosting mosquito boxes. Some residents opposed the project, worrying about bites by the mosquitoes (male mosquitoes do not have the mouthparts to bite). Others were unhappy about becoming a test site, with some threatening to derail the experiments by filling the mosquito boxes with bleach. In 2020, Oxitec's OX5034 mosquito was approved for release by state and federal authorities for use in Florida. In April 2021, boxes containing mosquito eggs were placed at six locations. Once they hatched, about 12,000 males were expected weekly over the following 12 weeks, totaling about 144,000. In the second phase, nearly 20 million mosquitoes were expected over 16 weeks. In 2022, 5 million mosquitoes were released. 
All female offspring that inherited the lethal gene were reported to have died before reaching adulthood. The company also reported that spread of the related mutations was limited to a small area. California In 2022, EPA officials approved the release of 2.4 billion males of A. aegypti in California's Central Valley through 2024. The project is a partnership with the Delta Mosquito and Vector Control district in Tulare county. It awaits approval by California pesticide regulators. Specimens cannot be released near any potential tetracycline sources (which allows females to develop), or within 500 meters of wastewater treatment facilities, commercial citrus, apple, pear, nectarine, peach growing areas, or livestock. Opponents include Friends of the Earth, the Institute for Responsible Tech and the Center for Food Safety who object to the lack of public data on the Florida trial and the technique's experimental status. Specimens have been identified in 21 of California's 58 counties. In 2022 Oxitec was seeking approval for a pilot release. Regulation OX513A was approved by Brazil's National Biosecurity Technical Commission (CTNBio) in April 2014. In January 2016 Brazil's National Biosafety Committee approved the release Oxitec mosquitos throughout their country. Brazil's health-regulatory agency, Anvisa, declared on 12 April 2016 that it would regulate Oxitec's mosquitoes. Anvisa announced that it was creating a legal framework for regulations. It requested Oxitec to demonstrate that its technology was safe and could reduce the transmission of mosquito-borne viruses. In 2020 Brazilian Biosafety Regulatory Authority CTNBio granted full commercial biosafety approval for Oxitec’s mosquitoes. Oxitec’s Florida Keys project was approved by federal and state regulators, including the U.S. Environmental Protection Agency (EPA) and the Florida Department of Agriculture and Consumer Services (FDACS). In August 2020, the Florida Keys Mosquito Control District (FKMCD) Board of Commissioners approved the project. The Netherlands agreed to release Oxitec's genetically modified mosquitoes to fight dengue fever, chikungunya and zika in Saba, a Dutch Caribbean island, after a report by The National Institute of Public Health and the Environment (RIVM) examined the effects that these mosquitoes could have in the local ecosystem and concluded the release of the mosquitoes would not pose risks to human health or the environment. The French High Council for Biology supported Oxitec mosquito releases in 2017. Criticism A 2019 study claimed that Oxitec's first generation A. aegypti (the redundant OX513A) had successfully hybridized with the local A. aegypti population. It was challenged by Oxitec and most of the study's co-authors. The study was found to be purely speculative and is now marked by its publisher with an Editorial Expression of Concern. See also Genetically modified organism Sterile insect technique References External links 2002 establishments in England Biotechnology companies established in 2002 Biotechnology companies of the United Kingdom Companies based in Oxfordshire Companies associated with the University of Oxford Insect-borne diseases Pest control Science and technology in Oxfordshire Vale of White Horse
Oxitec
Biology
2,074
26,738,212
https://en.wikipedia.org/wiki/Classic%20Game%20Room
Classic Game Room (commonly abbreviated CGR) is a video game review web series produced, directed, edited and hosted by Mark Bussler of Inecom, LLC. The show reviewed both retro and modern video games along with gaming accessories, pinball machines, and minutiae such as gaming mousepads and food products. The show broadcast its reviews via the video-sharing website YouTube under the screen name 'Lord Karnage' until late 2013, when it moved to Dailymotion, citing issues with YouTube. In May 2014, via the Classic Game Room Facebook page and YouTube channel, it was announced that the show would again be posting episodes on YouTube. It also moved onto Patreon and Amazon Prime, before being cancelled in April 2019. After a four-year hiatus, the series returned to YouTube as Classic Game Room 2085 but was short-lived, as declining viewer numbers did not justify the cost of making episodes. The series currently continues as Classic Game Room: The Podcast, where Mark talks about various aspects of games, music and general pop culture using his own signature flair of humor, insights and sometimes fantastical "real life" space stories. History The Game Room era (1999–2000) The show was originally titled The Game Room and presented by Mark Bussler and David Crosson. Founded by Bussler, it launched on November 7, 1999, on the internet startup website FromUSAlive. The pair had met at film school and shared a mutual love of movies and video games. At first, Bussler and Crosson planned to review mainly then-modern games, but after a segment on older games proved to be popular, the show began reviewing earlier titles. The show was run on a tight US$50 budget, so improvised special effects were used. However, the low-budget nature of the show led to slow episode production rates, and when revenue failed to cover the costs of running the show, The Game Room was canceled on October 23, 2000. Tokyo Xtreme Racer 2 for the Sega Dreamcast was the last game to be reviewed on the show. Crosson moved on to a career in pharmaceuticals, while Bussler would spend the next 8 years producing and directing documentaries on American history, such as Expo: Magic of the White City, and working with actors such as Gene Wilder and Richard Dreyfuss. The revival (HD) era (2008–2015) The show returned as Classic Game Room HD (HD standing for Heavy Duty according to Bussler) on February 20, 2008, hosted by Bussler. Crosson appeared at the end of the show's first episode, a review of Captain America and The Avengers, where Mark asked him what he thought of the game. On August 29, 2009, Bussler announced the launch of the Classic Game Room website ClassicGameRoom.net (now ClassicGameRoom.com) on the show's YouTube channel. The website hosts links and embedded videos for all the show's episodes as well as written reviews. Later, the site began hosting reviews written by fans of the show as well as linking to their videos. In May 2010, Inecom launched a second show titled CGR Undertow hosted by Derek Buck. Later he was joined by TJ and a rotating cast of other reviewers. The show had reviewers give their own take on games reviewed by Mark as well as other games not reviewed on the main show. In early 2012, some Classic Game Room reviews were co-hosted by Derek and TJ. In late 2013, Classic Game Room left YouTube and began posting videos on Dailymotion. On May 8, 2014, Classic Game Room announced via its Facebook page and YouTube channel that it would be returning to YouTube on May 10.
Episodes first hosted on Dailymotion were added to their respective YouTube channels. The last episode posted on Dailymotion was the review of Mario Kart 8. On November 2, 2015, Bussler announced that the show would greatly slow its production following the end of 2015. Changes would include the shutting down of the show's store and the end of production on its secondary channel, CGR Undertow. Bussler stated that this was due to a change in his life and that he would like to focus more on his writing and filmmaking. He also said that he would continue the show as a hobby, similar to how it began for him. CGR Mark 3, Patreon and Amazon (2016–2017) Bussler later opened a Patreon for the series at the recommendation of fans in order to keep the series operating as normal, though it would be renamed Classic Game Room Mark 3. The first CGR Mk3 episode was released on January 8, 2016. During mid-2016, due to declining YouTube ad revenue, Bussler experimented with a premium content delivery system. Subscribers on Patreon received full-length game reviews, dubbed "Hyper Cuts", while free streaming video services received significantly shortened preview-length versions of the same reviews – effectively creating a partial paywall. After this was met with a negative reaction from fans, the overall average runtime of the free streaming reviews returned to normal length, with extended reviews available to Patreon subscribers. The extended reviews later became available on Amazon Prime in December that year. CGR Mark 4 (2017–2018) In June 2017, Bussler announced another update to Classic Game Room, intending to broaden the content variety to encompass toys, comics and anything else Bussler fancied. Subsequent videos dropped the Mark 3 moniker. Classic Game Room 2085 (since 2018) On January 19, 2018, Bussler announced another move off YouTube, supposedly permanently, onto Amazon Prime under the new title Classic Game Room 2085, beginning in March. The new series would feature episodes far longer than those on YouTube, each covering a variety of games. Bussler cited irreconcilable differences and frustration with YouTube and its services as the contributing factor. Season 1 debuted on March 2. On February 5, 2019, Bussler said a second season was as yet undecided. Around the time of the main series' 2023 revival, however, Bussler announced there would be a second season of 2085 starting at the beginning of 2024. Season 2 began on January 1, 2024. Classic Game Room Infinity (2018–2019) and second closure From December 2018, Bussler returned to YouTube (as well as to Instagram and TikTok) with a new show titled Classic Game Room Infinity, which focused on shorter, snappier content. On April 24, 2019, Bussler announced on Instagram that he had decided to end all video production, and to continue the Classic Game Room brand as a book publisher. Rebranding and Classic Game Room Year 24 CGR Publishing has distributed books on video games, video production and American history. The channel was briefly known as Turbo Volcano and sold t-shirt merchandise, before being rebranded as '80s Comics, reviewing classic comic books and art supplies. After a year of inactivity, the channel was rebranded as CGR Publishing in 2022 and began posting new commercials for books, as well as episodes of Bussler's podcast. In mid-2023, the YouTube channel's name was once again changed back to "Classic Game Room". In June 2023, without making an official announcement, Bussler began uploading new game reviews to YouTube under the moniker.
The first review was of the PS5 port of Ninja Golf, a game on the Atari 7800, and saw the return of the early HD era presentation, with Bussler only using narration over game footage. Other series CGR Interviews Bussler has also conducted a number of interviews with people involved with the video game industry, as part of the CGR Interviews series, such as his interview with Tommy Tallarico, video game soundtrack composer and founder of the Video Games Live concert series. In addition, Bussler has been interviewed for The Art of Community book. CGR Films A documentary film, Classic Game Room – The Rise and Fall of the Internet's Greatest Video Game Review Show, was released on August 28, 2007, on DVD. It is 100 minutes long and featured footage from a number of the original reviews and commentary from Bussler and Crosson. The film was directed by Bussler. In 2015, a second film, The Best of Classic Game Room: 15th Anniversary Collection, also directed by Bussler, was released on Blu-ray and DVD. It has a runtime of 280 minutes and features a collection of videos previously available on YouTube but also includes plenty of exclusive material including exclusive game reviews, an interview with Dave Crosson, a commentary track and more all wrapped in a comedic story arc involving time travel, robots, and clones. 2015 also saw the release of a compilation film from the sister channel CGR Undertow, A Great Big Bunch of CGR Undertow, on DVD. It is a collection of previously released reviews presented by Derek Buck and his clone. Special features include a mini documentary and a blooper reel. In 2016, Bussler announced a series of feature length video game reviews, the Classic Game Room Feature Reviews, beginning with MUSHA. The 90-minute reviews, funded by Kickstarter crowdfunding, would be in-depth analysis of games, covering everything from presentation to controllers, with comical elements. The following reviews were Herzog Zwei and Super Pac-Man. CGR Podcast In November 2021, Bussler launched a new podcast, CGR Podcast, featuring Turbo Volcano. Presented by Bussler, the show discussed behind-the-scenes work in publishing and retro pop culture. Episodes began to be uploaded to YouTube the following year, starting with Episode #8: Nobody Puts Truxton in the corner. See also Expo: Magic of the White City – a 2005 documentary by Bussler on the Chicago World's Fair, narrated by Gene Wilder. References Further reading External links Classic Game Room at the Internet Movie Database Classic Game Room: The Rise and Fall of the Internet's Greatest Video Game Review Show at the Internet Movie Database Internet properties established in 1999 Internet properties disestablished in 2000 Internet properties established in 2008 Video game news websites 2000s YouTube series American YouTubers American non-fiction web series Nostalgia Computing culture 2010s YouTube series 1999 establishments in Pennsylvania
Classic Game Room
Technology
2,080
1,867,492
https://en.wikipedia.org/wiki/Starchitect
Starchitect is a portmanteau used to describe architects whose celebrity and critical acclaim have transformed them into idols of the architecture world and may even have given them some degree of fame among the general public. Celebrity status is generally associated with avant-gardist novelty. Developers around the world have proven eager to sign up "top talent" (i.e., starchitects) in hopes of convincing reluctant municipalities to approve large developments, of obtaining financing or of increasing the value of their buildings. A key characteristic is that the starchitecture is almost always "iconic" and highly visible within the site or context. As the status is dependent on current visibility in the media, fading media status implies that architects lose "starchitect" status—hence a list can be drawn up of former "starchitects". The Bilbao effect Buildings are frequently regarded as profit opportunities, so creating "scarcity" or a certain degree of uniqueness gives further value to the investment. The balance between functionality and avant-gardism has influenced many property developers. For instance, architect-developer John Portman found that building skyscraper hotels with vast atriums—which he did in various U.S. cities during the 1980s—was more profitable than maximizing floor area. However, it was the rise of postmodern architecture during the late 1970s and early 1980s that gave rise to the idea that star status in the architectural profession was about an avant-gardism linked to popular culture—which, it was argued by postmodern critics such as Charles Jencks, had been derided by the guardians of a modernist architecture. In response, Jencks argued for "double coding"; i.e., that postmodernism could be understood and enjoyed by the general public and yet command "critical approval". The star architects from that period often built little or their best-known works were "paper architecture"—unbuilt or even unbuildable schemes, yet known through frequent reproduction in architectural magazines, such as the work of Léon Krier, Michael Graves, Aldo Rossi, Robert A. M. Stern, Hans Hollein, and James Stirling. As postmodernism went into decline, its avant-gardist credentials suffered due to its associations with vernacular and traditionalism, and celebrity shifted back towards modernist avant-gardism. But a high-tech strand of modernism persisted in parallel with a formally retrogressive post-modernism; one that often championed "progress" by celebrating, if not exposing, structure and systems engineering. Such technological virtuosity can be discovered during this time in the work of Norman Foster, Renzo Piano, and Richard Rogers, the latter two having designed the controversial Pompidou Centre (1977) in Paris, which opened to international acclaim. What this so‑called high-tech architecture showed was that an industrial aesthetic—an architecture characterized as much by urban grittiness as engineering efficiency—had popular appeal. This was also somewhat evident in so‑called deconstructionist architecture, such as the employment of chainlink fencing, raw plywood and other industrial materials in designs for residential and commercial architecture. Arguably the most notable practitioner along these lines, at least in the 1970s, is the now internationally renowned architect Frank Gehry, whose house in Santa Monica, California bears these characteristics. 
With urban regeneration picking up from the turn of the twentieth century, economists forecast that globalization and the powers of multinational corporations would shift the balance of power away from nation states towards individual cities, which would then compete with neighboring cities and cities elsewhere for the most lucrative modern industries, which increasingly, in major western European and U.S. cities, did not include manufacturing. Thus cities set about "reinventing themselves", giving precedence to the value given by culture. Municipalities and non-profit organizations hope the use of a starchitect will drive traffic and tourist income to their new facilities. With the popular and critical success of the Guggenheim Museum in Bilbao, Spain, by Frank Gehry, through which a rundown area of a city in economic decline gained huge financial growth and prestige, the media started to talk about the so-called 'Bilbao effect': a star architect designing a blue-chip, prestige building was thought to make all the difference in producing a landmark for the city. Similar examples are the Imperial War Museum North (2002), Greater Manchester, UK, by Daniel Libeskind, the Kiasma Museum of Contemporary Art, Helsinki, Finland, by Steven Holl, and the Seattle Central Library (2004), Washington state, United States, by OMA. The origin of the phrase "wow factor architecture" is uncertain, but the phrase has been used extensively in business management in both the UK and United States to promote avant-gardist buildings within urban regeneration since the late 1990s. It has even taken on a more scientific aspect, with money made available in the UK to study the significance of the factor. In research carried out at Sussex University, UK, in 2000, interested parties were asked to consider the "effect on the mind and the senses" of new developments. In an attempt to produce a "delight rating" for a given building, architects, clients and the intended users of the building were encouraged to ask: "What do passers-by think of the building?", "Does it provide a focal point for the community?" The Design Quality Indicator has been produced by the UK Construction Industry Council, so that bodies commissioning new buildings will be encouraged to consider whether the planned building has "the wow factor" in addition to more traditional concerns of function and cost. The "wow factor" has also been taken up by architecture critics, such as the New York Times' Herbert Muschamp and Nicolai Ouroussoff, in their arguments that the city needs to be "radically" reshaped by new towers. Discussing Spanish starchitect Santiago Calatrava's new skyscraper at 80 South Street near the foot of the Brooklyn Bridge, Ouroussoff mentions how Calatrava's apartments are conceived as self-contained urban refuges, $30,000,000 prestige objects for the global elites: "If they differ in spirit from the Vanderbilt mansions of the past, it is only in that they promise to be more conspicuous. They are paradises for aesthetes." Historical overview of the status of architects The notion of giving celebrity status to architects is not new, but is contained within the general tendency, from the Renaissance onwards, to give status to artists. Until the modern era, artists in Western civilization were generally working under a patron – usually the Church or the rulers of the state – and their reputation could become commodified, such that their services could be bought by different patrons.
One of the first records of celebrity status is artist-architect Giorgio Vasari's monograph Le vite de' più eccellenti pittori, scultori e architettori (in English, Lives of the Most Excellent Painters, Sculptors, and Architects), first published in 1550, recording the Italian Renaissance at the time of its flourishment. Vasari, himself under the patronage of Grand Duke Cosimo I de' Medici, even favoured architects from the city where he resided, Florence, attributing to them innovation, while barely mentioning other cities or places further away. The importance of Vasari's book was in the ability to consolidate reputation and status without people actually having to see the works described. The development of media has thus been equally of central importance to architectural celebrity as other walks of life. While status arising from patronage from the Church and State continued with the rise of Enlightenment and capitalism (e.g., the position of architect Christopher Wren in the patronage of the British Crown, the City of London, the Church of England and Oxford University during the 17th century), there was an expansion in artistic and architectural services available, each competing for commissions with the growth of industry and the middle-classes. Architects nevertheless remained essentially servants to their clients: while Romanticism and Modernism in the other arts encouraged individualism, progress in architecture was geared mostly to improvements in building performance (standards of comfort), engineering and the development of new building typologies (e.g., factories, railway stations, and later airports) and public benevolence (the problems of urbanization, public housing, overcrowding, etc.), yet allowing some architects to concern themselves with architecture as an autonomous art (as flourished with Art Nouveau and Art Deco). The heroes of modern architecture, in particular Le Corbusier, were seen as heroic for generating theories about how architecture should be concerned with the development of society. Such publicity also made it into the popular press: in the post-war era Time magazine occasionally featured architects on its front cover – for instance, in addition to Le Corbusier, Eero Saarinen, Frank Lloyd Wright, and Ludwig Mies van der Rohe. In more recent times Time magazine has also featured Philip Johnson, Peter Eisenman, Rem Koolhaas and Zaha Hadid. Eero Saarinen specialized in building headquarters for prestigious U.S. companies, such as General Motors, CBS, and IBM, and these companies used architecture to promote their corporate images: e.g., during the 1950s General Motors often photographed their new car models in front of their headquarters in Michigan. Corporations have continued to understand the value of bringing in starchitects to design their key buildings. For instance, the manufacturing company Vitra is well known for the works of notable architects that make up its premises in Weil am Rhein, Germany; including Zaha Hadid, Tadao Ando, SANAA, Herzog & de Meuron, Álvaro Siza, and Frank Gehry; as is the fashion house Prada for commissioning Rem Koolhaas to design their flagship stores in New York and Los Angeles. However, throughout history the greatest prestige has come with the design of public buildings – opera houses, libraries, townhalls, and especially museums, often referred to as the "new cathedrals" of our times. Measuring celebrity status Objectivity in the question of status would seem questionable. 
However, researchers at Clarkson University have used the method of Google hits to 'measure' the degree of celebrity status: "to establish a precise mathematical definition of fame, both in the sciences and the world at large". Prizes and the consolidation of reputation Although there are few architects well known to the general public, "starchitects" are held in the highest esteem by their professional colleagues and the professional media. Such status is marked not only by prestigious commissions but also by various prizes. For example, the Pritzker Prize, awarded since 1979, attempts to increase its own prestige by mentioning how its procedures are modeled on the Nobel Prize. In his 1979 book Architecture and its Interpretation, Juan Pablo Bonta put forward a theory about how buildings and architects achieve canonic status. He argued that a building and its architect achieve iconic or canonic status after a period when various critics and historians build up an interpretation that then becomes unquestioned for a significant period. If the text itself receives canonical status, then the status of the architect is further endorsed. For example, in the first edition of Siegfried Giedion's book Space Time and Architecture (1949) the Finnish architect Alvar Aalto was not mentioned at all. In the second edition he received more attention than any other architect, including Le Corbusier, who until then had been understood as the most important modernist architect. However, there is a difference between canonic status and "starchitect": as part of the "wow-factor" aspect of the term depends on current media visibility, it is used only to describe currently practicing architects: Frank Gehry Santiago Calatrava Álvaro Siza Massimiliano Fuksas Kazuyo Sejima and Ryue Nishizawa (SANAA) Sou Fujimoto David Childs (Skidmore, Owings & Merrill) Tadao Ando Norman Foster Jeanne Gang Nicholas Grimshaw Steven Holl Christoph Ingenhoven Toyo Ito Rem Koolhaas Daniel Libeskind Greg Lynn Winy Maas (MVRDV) Thom Mayne (Morphosis) Richard Meier Herzog & de Meuron João Luís Carrilho da Graça Rafael Moneo Jean Nouvel Renzo Piano Eduardo Souto de Moura William Pedersen (Kohn Pedersen Fox) Christian de Portzamparc Joshua Prince-Ramus (REX) Wolf D. Prix (Coop Himmelb(l)au) Robert Stern Richard Rogers Ben van Berkel (UNStudio) Bernard Tschumi Rafael Viñoly Peter Zumthor Bjarke Ingels (BIG) Kjetil Trædal Thorsen (Snøhetta) Former starchitects Josef Hoffmann Mimar Sinan Le Corbusier Antoni Gaudí Luis Barragán Lluís Domènech i Montaner Mario Botta Peter Eisenman Michael Graves Muzharul Islam Philip Johnson Ludwig Mies van der Rohe Oscar Niemeyer I. M. Pei Kevin Roche Eero Saarinen Robert Venturi Denise Scott Brown Frank Lloyd Wright Zaha Hadid Gae Aulenti Charles Gwathmey César Pelli (Pelli Clarke Pelli) Helmut Jahn Greene and Greene See also Boosterism References Tracking Turkey’s First Starchitect Architects
Starchitect
Engineering
2,771
60,447,324
https://en.wikipedia.org/wiki/Heavy%20fuel%20oil
Heavy fuel oil (HFO) is a category of fuel oils of a tar-like consistency. Also known as bunker fuel, or residual fuel oil, HFO is the result or remnant from the distillation and cracking process of petroleum. For this reason, HFO contains several different compounds that include aromatics, sulfur, and nitrogen, making emissions upon combustion more polluting compared to other fuel oils. HFO is predominantly used as a fuel source for marine vessel propulsion using marine diesel engines due to its relatively low cost compared to cleaner fuel sources such as distillates. The use and carriage of HFO on-board vessels presents several environmental concerns, namely the risk of oil spill and the emission of toxic compounds and particulates including black carbon. The use of HFOs is banned as a fuel source for ships travelling in the Antarctic as part of the International Maritime Organization's (IMO) International Code for Ships Operating in Polar Waters (Polar Code). For similar reasons, an HFO ban in Arctic waters is currently being considered. Heavy fuel oil characteristics HFO consists of the remnants or residual of petroleum sources once the hydrocarbons of higher quality are extracted via processes such as thermal and catalytic cracking. Thus, HFO is also commonly referred to as residual fuel oil. The chemical composition of HFO is highly variable due to the fact that HFO is often mixed or blended with cleaner fuels; blending streams can include carbon numbers from C20 to greater than C50. HFOs are blended to achieve certain viscosity and flow characteristics for a given use. As a result of the wide compositional spectrum, HFO is defined by processing, physical and final use characteristics. Being the final remnant of the cracking process, HFO also contains mixtures of the following compounds to various degrees: "paraffins, cycloparaffins, aromatics, olefins, and asphaltenes as well as molecules containing sulfur, oxygen, nitrogen and/or organometals". HFO is characterized by a maximum density of 1010 kg/m3 at 15°C, and a maximum viscosity of 700 mm2/s (cSt) at 50°C according to ISO 8217. Combustion and atmospheric reactions Given HFO's elevated sulfur contamination (maximum of 5% by mass), the combustion reaction results in the formation of sulfur dioxide SO2. Heavy fuel oil use and shipping Since the middle of the 20th century, HFO has been used primarily by the shipping industry due to its low cost compared with all other fuel oils, being up to 30% less expensive, as well as the historically lax regulatory requirements for emissions of nitrogen oxides (NOx) and sulfur dioxide (SO2) by the IMO. For these two reasons, HFO is the single most widely used engine fuel oil on-board ships. Data available until 2007 for global consumption of HFO at the international marine sector reports total fuel oil usages of 200 million tonnes, with HFO consumption accounting for 174 million tonnes. Data available until 2011 for fuel oil sales to the international marine shipping sector reports 207.5 million tonnes total fuel oil sales with HFO accounting for 177.9 million tonnes. Marine vessels can use a variety of different fuels for the purpose of propulsion, which are divided into two broad categories: residual oils or distillates. In contrast to HFOs, distillates are the petroleum products created through refining crude oil and include diesel, kerosene, naphtha and gas. 
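The ISO 8217 limits quoted above (a maximum density of 1010 kg/m3 at 15°C and a maximum viscosity of 700 mm2/s at 50°C) lend themselves to a simple conformity check. The sketch below is illustrative only: the sample values are hypothetical, and it encodes just the two limits named in this article, not the full standard.

```python
# Check a fuel sample against the two residual-marine-fuel limits quoted
# above from ISO 8217: density at 15 °C and kinematic viscosity at 50 °C.
# Sample values are hypothetical; the real standard specifies many more
# parameters (sulfur content, flash point, water, ash, ...).

MAX_DENSITY_15C_KG_M3 = 1010.0   # kg/m^3 at 15 °C
MAX_VISCOSITY_50C_CST = 700.0    # mm^2/s (cSt) at 50 °C

def within_hfo_limits(density_15c: float, viscosity_50c: float) -> bool:
    return (density_15c <= MAX_DENSITY_15C_KG_M3
            and viscosity_50c <= MAX_VISCOSITY_50C_CST)

print(within_hfo_limits(991.0, 380.0))    # a typical 380 cSt bunker fuel -> True
print(within_hfo_limits(1015.0, 720.0))   # exceeds both limits -> False
```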
Residual oils are often combined to various degrees with distillates to achieve desired properties for operational and/or environmental performance. Commonly used categories of marine fuel oil include such mixtures; all mixtures, including low-sulfur marine fuel oil, are still considered HFO. Arctic environmental concerns The use and carriage of HFO in the Arctic is a commonplace marine industry practice. In 2015, over 200 ships entered Arctic waters carrying a total of 1.1 million tonnes of fuel, with 57% of the fuel consumed during Arctic voyages being HFO. In the same year, the carriage of HFO was reported at 830,000 tonnes, representing significant growth from the 400,000 tonnes reported in 2012. A 2017 report by the Norwegian classification society Det Norske Veritas (DNV GL) calculated that HFO accounted for over 75% of total fuel use by mass in the Arctic, with larger vessels being the main consumers. In light of increased traffic in the area, and given that the Arctic is considered to be a sensitive ecological area with a higher response intensity to climate change, the environmental risks posed by HFO present concern for environmentalists and governments in the area. The two main environmental concerns for HFO in the Arctic are the risk of spill or accidental discharge and the emission of black carbon as a result of HFO consumption. Environmental impacts of heavy fuel oil spills Due to its very high viscosity and elevated density, HFO released into the environment is a greater threat to flora and fauna compared to distillate or other residual fuels. In 2009, the Arctic Council identified the spill of oil in the Arctic as the greatest threat to the local marine environment. Being the remnant of the distillation and cracking processes, HFO is characterized by an elevated overall toxicity compared to all other fuels. Its viscosity prevents breakdown in the environment, a property exacerbated by the cold temperatures in the Arctic, resulting in the formation of tar lumps and an increase in volume through emulsification. Its density and its tendency to persist and emulsify can result in HFO polluting both the water column and the seabed. History of major HFO spills from 2000 onward The following HFO-specific spills have occurred since the year 2000. The information is organized according to year and ship name and includes the amount released and the spill location: 2000 Janra, Germany (40 tons in the Sea of Åland) 2001 Baltic Carrier, Marshall Islands (2,350 tons in the Baltic Sea) 2002 Prestige oil spill, Spain (17.8 million gallons in the Atlantic Ocean) 2003 Fu Shan Hai, China (1,680 tons in the Baltic Sea) 2004 Selendang Ayu, Malaysia (336,000 gallons off Unalaska Island – near the Arctic) 2009 Full City, Panama (6,300–9,500 gallons in Langesund) 2011 Godafoss, Malaysia (200,000 gallons in the Hvaler Islands) 2011 Golden Trader (205 tons in Skagerrak) Environmental impacts of heavy fuel oil use The combustion of HFO in ship engines results in the highest amount of black carbon emissions compared to all other fuels. The choice of marine fuel is the most important determinant of a ship engine's black carbon emission factor. The second most important factor in the emission of black carbon is the engine load, with black carbon emission factors increasing up to six times at low engine loads. Black carbon is the product of incomplete combustion and a component of soot and fine particulate matter (<2.5 μm).
It has a short atmospheric lifetime of a few days to a week and is typically removed upon precipitation events. Although there has been debate concerning the radiative forcing of black carbon, combinations of ground and satellite observations suggest a global solar absorption of 0.9W·m−2, making it the second most important climate forcer after CO2. Black carbon affects the climate system by: decreasing the snow/ice albedo through dark soot deposits and increasing snowmelt timing, reducing the planetary albedo through absorption of solar radiation reflected by the cloud systems, earth surface and atmosphere, as well as directly decreasing cloud albedo with black carbon contamination of water and ice found therein. The greatest increase in Arctic surface temperature per unit of black carbon emissions results from the decrease in snow/ice albedo which makes Arctic specific black carbon release more detrimental than emissions elsewhere. IMO and the Polar Code The International Maritime Organization (IMO), a specialized arm of the United Nations, adopted into force on 1 January 2017 the International Code for Ships Operating in Polar Waters or Polar Code. The requirements of the Polar Code are mandatory under both the International Convention for the Prevention of Pollution from Ships (MARPOL) and the International Convention for the Safety of Life at Sea (SOLAS). The two broad categories covered by the Polar Code include safety and pollution prevention related to navigation in both Arctic and Antarctic polar waters. The carriage and use of HFO in the Arctic is discouraged by the Polar Code while being banned completely from the Antarctic under MARPOL Annex I regulation 43. The ban of HFO use and carriage in the Antarctic precedes the adoption of the Polar Code. At its 60th session (26 March 2010), The Marine Environmental Protection Committee (MEPC) adopted Resolution 189(60) which went into effect in 2011 and prohibits fuels of the following characteristics: crude oils having a density at 15°C higher than 900 kg/m3 ; oils, other than crude oils, having a density at 15°C higher than 900 kg/m3 or a kinematic viscosity at 50°C higher than 180 mm2/s; or bitumen, tar and their emulsions. IMO's Marine Environmental Protection Committee (MEPC) tasked the Pollution Prevention Response Sub-Committee (PPR) to enact a ban on the use and carriage of heavy fuel in Arctic waters at its 72nd and 73rd sessions. This task is also accompanied by a requirement to properly define HFO taking into account its current definition under MARPOL Annex I regulation 43. The adoption of the ban is anticipated for 2021, with widespread implementation by 2023. Resistance to heavy fuel oil phase-out The Clean Arctic Alliance was the first IMO delegate nonprofit organization to campaign against the use of HFO in Arctic waters. However, the phase-out and ban of HFO in the Arctic was formally proposed to MEPC by eight countries in 2018: Finland, Germany, Iceland, the Netherlands, New Zealand, Norway, Sweden and the United States. Although these member states continue to support the initiative, several countries have been vocal about their resistance to an HFO ban on such a short time scale. The Russian Federation has expressed concern for impacts to the maritime shipping industry and trade given the relatively low cost of HFO. Russia instead suggested the development and implementation of mitigation measures for the use and carriage of HFO in Arctic waters. 
Canada and the Marshall Islands have presented similar arguments, highlighting the potential impacts on Arctic communities (namely remote indigenous populations) and economies. To address these concerns and this resistance, at its 6th session in February 2019 the PPR sub-committee working group developed a "draft methodology for analyzing impacts" of HFO, to be finalized at PPR's 7th session in 2020. The purpose of the methodology is to evaluate the ban according to its economic and social impacts on Arctic indigenous communities and other local communities, to measure the anticipated benefits to local ecosystems, and potentially to consider other factors that could be positively or negatively affected by the ban. References See also Mazut Oils Petroleum products IARC Group 2B carcinogens Liquid fuels
Heavy fuel oil
Chemistry
2,270
30,737,910
https://en.wikipedia.org/wiki/Saturated%20absorption%20spectroscopy
Saturated absorption spectroscopy measures the transition frequency of an atom or molecule between its ground state and an excited state. In saturated absorption spectroscopy, two counter-propagating, overlapped laser beams are sent through a sample of atomic gas. One of the beams stimulates photon emission in excited atoms or molecules when the laser's frequency matches the transition frequency. By changing the laser frequency until these extra photons appear, one can find the exact transition frequency. This method enables precise measurements at room temperature because it is insensitive to doppler broadening. Absorption spectroscopy measures the doppler-broadened transition, so the atoms must be cooled to millikelvin temperatures to achieve the same sensitivity as saturated absorption spectroscopy. Principle of saturated absorption spectroscopy To overcome the problem of Doppler broadening without cooling down the sample to millikelvin temperatures, a classical pump–probe scheme is used. A laser with a relatively high intensity is sent through the atomic vapor, known as the pump beam. Another counter-propagating weak beam is also sent through the atoms at the same frequency, known as the probe beam. The absorption of the probe beam is recorded on a photodiode for various frequencies of the beams. Although the two beams are at the same frequency, they address different atoms due to natural thermal motion. If the beams are red-detuned with respect to the atomic transition frequency, then the pump beam will be absorbed by atoms moving towards the beam source, while the probe beam will be absorbed by atoms moving away from that source at the same speed in the opposite direction. If the beams are blue-detuned, the opposite occurs. If, however, the laser is approximately on resonance, these two beams address the same atoms, those with velocity vectors nearly perpendicular to the direction of laser propagation. In the two-state approximation of an atomic transition, the strong pump beam will cause many of the atoms to be in the excited state; when the number of atoms in the ground state and the excited state are approximately equal, the transition is said to be saturated. When a photon from the probe beam passes through the atoms, there is a good chance that, if it encounters an atom, the atom will be in the excited state and will thus undergo stimulated emission, with the photon passing through the sample. Thus, as the laser frequency is swept across the resonance, a small dip in the absorption feature will be observed at each atomic transition (generally hyperfine resonances). The stronger the pump beam, the wider and deeper the dips in the Gaussian Doppler-broadened absorption feature become. Under perfect conditions, the width of the dip can approach the natural linewidth of the transition. A consequence of this method of counter-propagating beams on a system with more than two states is the presence of crossover lines. When two transitions are within a single Doppler-broadened feature and share a common ground state, a crossover peak at a frequency exactly between the two transitions can occur. This is the result of moving atoms seeing the pump and probe beams resonant with two separate transitions. The pump beam can cause the ground state to be depopulated, saturating one transition, while the probe beam finds much fewer atoms in the ground state because of this saturation, and its absorption falls. 
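To put rough numbers on the widths involved in the discussion above, the following minimal sketch, which is not part of the original article, assumes the rubidium-87 D2 line near 780 nm, a vapour cell at 300 K, and a natural linewidth of about 6 MHz (all illustrative values), and compares the Doppler-broadened width (derived in the next section) with the natural linewidth that the saturated absorption dips can approach.

```python
# Rough scale estimate: Doppler-broadened width of a rubidium line at room
# temperature versus its natural linewidth. The 780 nm wavelength, 300 K
# temperature and ~6 MHz natural linewidth are illustrative assumptions
# (Rb-87 D2 line), not values stated in this article.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
c = 2.99792458e8          # speed of light, m/s
amu = 1.66053907e-27      # atomic mass unit, kg

m = 86.909 * amu          # mass of a Rb-87 atom, kg
T = 300.0                 # room temperature, K
nu0 = c / 780.241e-9      # optical transition frequency, Hz

# Full width at half maximum of the Doppler-broadened (Gaussian) profile
doppler_fwhm = nu0 * math.sqrt(8 * k_B * T * math.log(2) / (m * c**2))

natural_fwhm = 6.07e6     # approximate natural linewidth, Hz

print(f"Doppler FWHM  ~ {doppler_fwhm / 1e6:.0f} MHz")    # ~ 0.5 GHz
print(f"Natural width ~ {natural_fwhm / 1e6:.1f} MHz")    # ~ 6 MHz
print(f"Ratio         ~ {doppler_fwhm / natural_fwhm:.0f}x")
```

The roughly two orders of magnitude between the two widths is what the counter-propagating pump–probe geometry recovers.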
These crossover peaks can be quite strong, often stronger than the main saturated absorption peaks. Doppler broadening of the absorption spectrum of an atom According to the description of an atom interacting with the electromagnetic field, the absorption of light by the atom depends on the frequency of the incident photons. More precisely, the absorption is characterized by a Lorentzian of width $\Gamma/2$ (for reference, $\Gamma \approx 2\pi \times 6\ \mathrm{MHz}$ for common rubidium D-line transitions). If we have a cell of atomic vapour at room temperature, then the distribution of velocity along the beam axis will follow a Maxwell–Boltzmann distribution $\frac{dN}{N} = \sqrt{\frac{m}{2\pi k_B T}}\,\exp\!\left(-\frac{m v^2}{2 k_B T}\right) dv$, where $N$ is the number of atoms, $k_B$ is the Boltzmann constant, and $m$ is the mass of the atom. According to the Doppler effect formula in the case of non-relativistic speeds, $\omega = \omega_0\left(1 + \frac{v}{c}\right)$, where $\omega_0$ is the frequency of the atomic transition when the atom is at rest (the one which is being probed). The value of $v$ as a function of $\omega$ and $\omega_0$ can be inserted in the distribution of velocities. The distribution of absorption as a function of the pulsation $\omega$ will therefore be proportional to a Gaussian with full width at half maximum $\Delta\omega_{\mathrm{FWHM}} = \omega_0\sqrt{\frac{8 k_B T \ln 2}{m c^2}}$. For a rubidium atom at room temperature, $\Delta\nu_{\mathrm{FWHM}} = \Delta\omega_{\mathrm{FWHM}}/2\pi \approx 0.5\ \mathrm{GHz}$. Therefore, without any special trick in the experimental setup probing the maximum of absorption of an atomic vapour, the uncertainty of the measurement will be limited by the Doppler broadening and not by the fundamental width of the resonance. Experimental realization As the pump and the probe beam must have the same exact frequency, the most convenient solution is for them to come from the same laser. The probe beam can be made from a reflection of the pump beam passed through a neutral density filter to reduce its intensity. To fine-tune the frequency of the laser, a diode laser with a piezoelectric transducer that controls the cavity wavelength can be used. Due to photodiode noise, the laser frequency can be swept across the transition and the photodiode reading averaged over many sweeps. In real atoms, there are sometimes more than two relevant transitions within the sample's Doppler profile (e.g. in alkali atoms with hyperfine interactions). This will generate the appearance of additional dips in the absorption feature due to these new resonances, in addition to crossover resonances. References Saturated Absorption Spectroscopy of Rubidium Atomic physics Spectroscopy
Saturated absorption spectroscopy
Physics,Chemistry
1,150
73,345,230
https://en.wikipedia.org/wiki/Edmund%20R.%20Malinowski
Edmund R. Malinowski (October 1932 – February 2020) was an American professor of chemistry and is considered to be one of the great pioneers of the field of chemometrics. He published over 70 research papers and his 1980 book, Factor Analysis in Chemistry is acknowledged to be the first text on factor analysis applied to chemistry.  Malinowski is credited with having an “enormous impact” on the field of chemometrics and the researchers who followed him. Education Edmund R. Malinowski was born in October 1932, in Mahanoy City, PA. He obtained a Bachelor of Science in chemistry from The Pennsylvania State University in 1954. He obtained both a MS degree (1956) and PhD (1961) in physical chemistry at Stevens Institute of Technology, and was a Robert Crooks Stanley Graduate Fellow as a student during those years. Career Malinowski went on to a long, distinguished academic career at the Stevens Institute of Technology. He joined the Institute’s chemistry faculty in 1965 and became a full professor there in 1970. The Institute awarded him the 1977 Jess H. Research Award for his work on the theory and applications of factor analysis, and the 1994 Henry Morton Distinguished Professor Award, which honors excellence in research and teaching. He retired from Stevens Institute of Technology in 1997. Paul Gemperline summarized Malinowski's work at the time of his retirement in a Journal of Chemometrics editorial: "Over the course of his career as a chemist and educator he has published over 70 research papers and presented over 200 papers at seminars and professional meetings." Research Before chemometrics was identified and named as a field of research and study by Kowalski and Wold in 1974, Malinowski was publishing work that would be regarded as chemometrics. His first paper on factor analysis was published in 1966. Influence and recognition According to Paul Gemperline, prior to Malinowski's 1980 book, Factor Analysis in Chemistry, "factor analysis and principal component analysis were virtually unknown in the world of chemistry." Gemperline goes on to say Malinowski's book was "influential" not only because of the "timely" introduction of these topics but also because he had a "clear writing style … that chemists could understand". Malinowski was recognized for being generous in his acknowledgement and recognition of the work of others, citing their work in his own presentations and publications, which helped younger researchers to be successful. Malinowski was recognized by his peers as a leader in the field of chemometrics in 1998 when he was awarded the Galactic Industries Award for Achievements in Chemometrics. After his 1997 retirement the Journal of Chemometrics published a special issue in Malinowski’s honor. List of Honors and Awards 1957 – 1959: Robert Crooks Stanley Graduate Fellowship, Stevens Institute of Technology 1977: Jess H. Research Award, Stevens Institute of Technology, 1994: Henry Morton Distinguished Professor Award, Stevens Institute of Technology 1997: Special Issue: In Honor of Professor Malinowski's Retirement, Journal of Chemometrics, 1998: Galactic Industries Award for Achievements in Chemometrics References 1932 births 2020 deaths American chemists Chemometricians Pennsylvania State University alumni Stevens Institute of Technology alumni Stevens Institute of Technology faculty People from Mahanoy City, Pennsylvania
Edmund R. Malinowski
Chemistry
672
76,543,945
https://en.wikipedia.org/wiki/Ruby%20Payne-Scott%20Medal%20and%20Lecture
The Ruby Payne-Scott Medal and Lecture for women in science is a distinguished career award that acknowledges outstanding Australian women researchers in the biological sciences or physical science. It is conferred by the Australian Academy of Science and is awarded to researchers who are usually resident in, and conduct their research predominantly in Australia. This award, established in 2021, honours the contributions of Ruby Payne-Scott, particularly in the fields of radiophysics and radio astronomy. Recipients References Australian science and technology awards Science awards honoring women Awards established in 2021 Australian Academy of Science Awards
Ruby Payne-Scott Medal and Lecture
Technology
108
27,666,374
https://en.wikipedia.org/wiki/ISO%2010012
ISO 10012:2003, Measurement management systems - Requirements for measurement processes and measuring equipment is the International Organization for Standardization (ISO) standard that specifies generic requirements and provides guidance for the management of measurement processes and metrological confirmation of measuring equipment used to support and demonstrate compliance with metrological requirements. It specifies quality management requirements of a measurement management system that can be used by an organization performing measurements as part of the overall management system, and to ensure metrological requirements are met. ISO 10012:2003 is not intended to be used as a requisite for demonstrating conformance with ISO 9001, ISO 14001 or any other standard. Interested parties can agree to use ISO 10012:2003 as an input for satisfying measurement management system requirements in certification activities. Other standards and guides exist for particular elements affecting measurement results, e.g. details of measurement methods, competence of personnel, and interlaboratory comparisons. ISO 10012:2003 is not intended as a substitute for, or as an addition to, the requirements of ISO/IEC 17025. Revisions ISO 10012-1:1992 ISO 10012-2:1997 ISO 10012-1:1992 Quality assurance requirements for measuring equipment – Part 1: Metrological confirmation system for measuring equipment Applies to: testing laboratories, including those providing a calibration service; suppliers of products or services; other organizations where measurement is used to demonstrate compliance with specified requirements. ISO 10012-2:1997 Quality assurance for measuring equipment – Part 2: Guidelines for control of measurement processes References ISO Catalogue in the ISO website ISO 10012:2003(en) Measurement management systems in the ISO online browsing platform 10012
ISO 10012
Technology
334
70,580,022
https://en.wikipedia.org/wiki/Crocco%27s%20Multiplanetary%20Trajectory
Crocco's Multiplanetary Trajectory, sometimes named Crocco's Mission or Crocco's "Grand Tour", is a mathematical description of a hypothetical Earth–Mars–Venus–Earth research mission, which was first proposed in 1956 by the aeronautics and space pioneer G. A. Crocco during the 7th International Astronautical Congress in Rome. History With the theoretical groundwork for spaceflight being laid in the first half of the 20th century, mathematicians estimated for the first time the amount of energy required for interplanetary spaceflight; the requirement was significantly beyond what any chemical fuel available at that time could deliver. It remained questionable whether humanity would ever be capable of reaching certain locations in the Solar System. Even though Walter Hohmann had already calculated the most energy-efficient trajectory between two nearly circular orbits (the Hohmann transfer orbit) at the beginning of the century, an Earth–Mars–Earth round trip using this flight path would have required remaining on the surface of Mars for 425 days, waiting for the planets to align and the next launch window to open, in addition to 259 days each for the journey to Mars and the return to Earth. For this reason, Gaetano Crocco developed the concept of a nonstop round trip to Mars, which would have had a far lower energy requirement, as the rocket engines of the craft would only have to accelerate it in a single maneuver to attain the velocity necessary to reach Mars. During the fly-by, the crew on board would have analyzed the Martian surface. Even this trajectory would have amounted to a flight time of a little over one year, provided the spacecraft was not accelerated in another orbital maneuver. Properties Crocco searched for an orbit with the following properties: it crosses the orbit of the Earth; it crosses the orbit of Mars; and it has an orbital period of one year. (By Kepler's third law, a one-year period fixes the semi-major axis at 1 AU, so crossing the orbit of Mars at about 1.52 AU requires an eccentricity of at least roughly 0.5, which in turn places the perihelion inside the orbit of Venus.) With proper selection of a certain launch window, this would also have allowed the spacecraft to return to Earth one year after departure. By applying slight modifications to the trajectory, it would have been possible to pass by Venus in the same mission as well. Crocco realized that the flight trajectory might be disrupted by the gravitational fields of Mars and Venus, delaying or ultimately preventing the return to Earth. He solved this problem by directing the flight path through the gravitational fields of Venus and Mars in such a way that their attractions would cancel each other out. The mission profile presented in 1956 consisted of a 154-day journey from Earth to Mars, followed by a 113-day leg to Venus, including a pass by the planet that used its gravitational attraction for a course correction, followed by a 98-day return to Earth. Crocco proposed that the first mission be launched in June 1971. Difference between Crocco's Mission and Gravity Assists Crocco was well aware of the planets' gravitational effects, but his mission profile uses them neither to accelerate nor to decelerate, limiting itself to using them for trajectory stabilization. Still, it is widely but mistakenly assumed that G. A. Crocco was the inventor of the gravity assist, which was first presented in 1961 by the American mathematician Michael Minovitch as a method to allow the exploration of the outer planets, which had previously been deemed nearly impossible. Pioneer 10, 11 and the Voyager probes would make use of this technique in the 1970s. Literature Michael A.
Minovitch, A method for determining interplanetary free-fall reconnaissance trajectories, Jet Propulsion Laboratory, 23 August 1961 (Archived 20 March 2019) Gaetano A. Crocco, One-Year Exploration-Trip Earth-Mars-Venus-Earth, 7th Congress of the International Astronautical Federation, Rome, September 1956 (Archived 15 March 2016) Richard L. Dowling, Kosmann, William J.; Minovitch, Michael A.; Ridenoure, Rex W., The origin of gravity-propelled interplanetary space travel, 41st Congress of the International Astronautical Federation, Dresden, 6–12 October 1990 (Archived 17 April 2021) References Astrophysics Human missions to Mars Human missions to Venus Missions to Mars Missions to Venus
Crocco's Multiplanetary Trajectory
Physics,Astronomy
836
47,833,273
https://en.wikipedia.org/wiki/Nova%20Scotia%20Association%20of%20Architects
The Nova Scotia Association of Architects (NSAA) is a professional association that regulates the practice of architecture in Nova Scotia, Canada. It was founded in 1932 and is empowered by the provincial Architects Act. The organisation is headquartered on Barrington Street in Halifax. The NSAA administers the Lieutenant Governor Design Awards in Architecture, the premier architectural awards in the province. References External links Architecture associations based in Canada Professional associations based in Nova Scotia Halifax, Nova Scotia
Nova Scotia Association of Architects
Engineering
91
18,399,589
https://en.wikipedia.org/wiki/Ehlers%E2%80%93Geren%E2%80%93Sachs%20theorem
The Ehlers–Geren–Sachs theorem, published in 1968 by Jürgen Ehlers, P. Geren and Rainer K. Sachs, shows that if, in a given universe, all freely falling observers measure the cosmic background radiation to have exactly the same properties in all directions (that is, they measure the background radiation to be isotropic), then that universe is an isotropic and homogeneous FLRW spacetime, provided one uses a kinetic picture and the collision term vanishes (the so-called Vlasov case) or there is so-called detailed balance. This result was later extended to the full Boltzmann case by R. Treciokas and G.F.R. Ellis. Using the fact that, as measured from Earth, the cosmic microwave background is indeed highly isotropic (the temperature characterizing this thermal radiation varies only by tenths of thousandths of a kelvin with the direction of observation), and making the Copernican assumption that Earth does not occupy a privileged cosmic position, this constitutes the strongest available evidence for our own universe's homogeneity and isotropy, and hence for the foundation of current standard cosmological models. Strictly speaking, this conclusion has a potential flaw. While the Ehlers–Geren–Sachs theorem concerns only exactly isotropic measurements, it is known that the background radiation does have minute irregularities. This was addressed by a generalization published in 1995 by W. R. Stoeger, Roy Maartens and George Ellis, which shows that an analogous result holds for observers who measure a nearly isotropic background radiation, and can justly infer that they live in a nearly FLRW universe. However, the paper by Stoeger et al. assumes that derivatives of the cosmic background temperature multipoles are bounded in terms of the multipoles themselves. The derivatives of the multipoles are not directly accessible to us and would require observations over time and space intervals on cosmological scales. In 1999 John Wainwright, M. J. Hancock and Claes Uggla showed a counterexample in the non-tilted perfect fluid case. Thus an almost isotropic cosmic microwave temperature does not imply an almost isotropic universe. Using the methods of Wainwright et al., Ho Lee, Ernesto Nungesser and John Stalker showed that these methods can also be applied to the Vlasov case, which was the original matter model of the EGS theorem. References Coordinate charts in general relativity Cosmic background radiation
Ehlers–Geren–Sachs theorem
Physics,Mathematics
517
196,206
https://en.wikipedia.org/wiki/Longevity
Longevity may refer to especially long-lived members of a population, whereas life expectancy is defined statistically as the average number of years remaining at a given age. For example, a population's life expectancy at birth is the same as the average age at death for all people born in the same year (in the case of cohorts). Longevity studies may involve putative methods to extend life. Longevity has been a topic not only for the scientific community but also for writers of travel, science fiction, and utopian novels. The legendary fountain of youth appeared in the work of the Ancient Greek historian Herodotus. There are difficulties in authenticating the longest human life span, owing to inaccurate or incomplete birth statistics. Fiction, legend, and folklore have proposed or claimed life spans in the past or future vastly longer than those verified by modern standards, and longevity narratives and unverified longevity claims frequently speak of their existence in the present. A life annuity is a form of longevity insurance. Life expectancy, as of 2010 Various factors contribute to an individual's longevity. Significant factors in life expectancy include gender, genetics, access to health care, hygiene, diet and nutrition, exercise, lifestyle, and crime rates. Below is a list of life expectancies in different types of countries: Developed countries: 77–90 years (e.g. Canada: 81.29 years, 2010 est.) Developing countries: 32–80 years (e.g. Mozambique: 41.37 years, 2010 est.) Population longevities are increasing as life expectancies around the world grow: Australia: 80 years in 2002, 81.72 years in 2010 France: 79.05 years in 2002, 81.09 years in 2010 Germany: 77.78 years in 2002, 79.41 years in 2010 Italy: 79.25 years in 2002, 80.33 years in 2010 Japan: 81.56 years in 2002, 82.84 years in 2010 Monaco: 79.12 years in 2002, 79.73 years in 2011 Spain: 79.06 years in 2002, 81.07 years in 2010 United Kingdom: 80 years in 2002, 81.73 years in 2010 United States: 77.4 years in 2002, 78.24 years in 2010 Long-lived individuals The Gerontology Research Group validates current longevity records by modern standards, and maintains a list of supercentenarians; many other unvalidated longevity claims exist. Record-holding individuals include: Eilif Philipsen (21 July 1682 – 20 June 1785, 102 years, 333 days): first person to reach the age of 100 (on 21 July 1782) and whose age could be validated. Geert Adriaans Boomgaard (1788–1899, 110 years, 135 days): first person to reach the age of 110 (on September 21, 1898) and whose age could be validated. Margaret Ann Neve, (18 May 1792 – 4 April 1903, 110 years, 346 days) the first validated female supercentenarian (on 18 May 1902). Jeanne Calment (1875–1997, 122 years, 164 days): the oldest person in history whose age has been verified by modern documentation. This defines the modern human life span, which is set by the oldest documented individual who ever lived. Sarah Knauss (1880–1999, 119 years, 97 days): the third oldest documented person in modern times and the oldest American. Jiroemon Kimura (1897–2013, 116 years, 54 days): the oldest man in history whose age has been verified by modern documentation. Kane Tanaka (1903–2022, 119 years, 107 days): the second oldest documented person in modern times and the oldest Japanese. Major factors Evidence-based studies indicate that longevity is based on two major factors: genetics and lifestyle. 
Genetics Twin studies have estimated that approximately 20-30% of the variation in human lifespan can be related to genetics, with the rest due to individual behaviors and environmental factors which can be modified. Although over 200 gene variants have been associated with longevity according to a US-Belgian-UK research database of human genetic variants, these explain only a small fraction of the heritability. Lymphoblastoid cell lines established from blood samples of centenarians have significantly higher activity of the DNA repair protein PARP (Poly ADP ribose polymerase) than cell lines from younger (20 to 70 year old) individuals. The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the mechanism of repair after sublethal oxidative DNA damage and in their PARP gene expression. These findings suggest that elevated PARP gene expression contributes to the longevity of centenarians, consistent with the DNA damage theory of aging. In July 2020, scientists used public biological data on 1.75 million people with known lifespans and identified 10 genomic loci which appear to intrinsically influence healthspan, lifespan, and longevity – of which half had not previously been reported at genome-wide significance and most of which are associated with cardiovascular disease – and identified haem metabolism as a promising candidate for further research within the field. Their study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. Lifestyle Longevity is a highly plastic trait, and traits that influence its components respond to physical (static) environments and to wide-ranging life-style changes: physical exercise, dietary habits, living conditions, and pharmaceutical as well as nutritional interventions. A 2012 study found that even modest amounts of leisure time physical exercise can extend life expectancy by as much as 4.5 years. Diet As of 2021, there is no clinical evidence that any dietary practice contributes to human longevity. Although health can be influenced by diet, including the type of foods consumed, the amount of calories ingested, and the duration and frequency of fasting periods, there is no good clinical evidence that fasting promotes longevity in humans. Calorie restriction is a widely researched intervention to assess effects on aging, defined as a sustained reduction in dietary energy intake compared to the energy required for weight maintenance. To ensure metabolic homeostasis, the diet during calorie restriction must provide sufficient energy, micronutrients, and fiber. Some studies on rhesus monkeys showed that restricting calorie intake resulted in lifespan extension, while other animal studies did not detect a significant change. According to preliminary research in humans, there is little evidence that calorie restriction affects lifespan. There is a link between diet and obesity and consequent obesity-associated morbidity. Biological pathways Four well-studied biological pathways that are known to regulate aging, and whose modulation has been shown to influence longevity, are the Insulin/IGF-1, mechanistic target of rapamycin (mTOR), AMP-activated protein kinase (AMPK), and Sirtuin pathways. Autophagy Autophagy plays a pivotal role in healthspan and lifespan extension. Change over time In preindustrial times, deaths at young and middle age were more common than they are today. 
This is not due to genetics, but because of environmental factors such as disease, accidents, and malnutrition, especially since the former were not generally treatable with pre-20th-century medicine. Deaths from childbirth were common for women, and many children did not live past infancy. In addition, most people who did attain old age were likely to die quickly from the above-mentioned untreatable health problems. Despite this, there are several examples of pre-20th-century individuals attaining lifespans of 85 years or greater, including John Adams, Cato the Elder, Thomas Hobbes, Christopher Polhem, and Michelangelo. This was also true for poorer people like peasants or laborers. Genealogists will almost certainly find ancestors living to their 70s, 80s and even 90s several hundred years ago. For example, an 1871 census in the UK (the first of its kind, but personal data from other censuses dates back to 1841 and numerical data back to 1801) found the average male life expectancy as being 44, but if infant mortality is subtracted, males who lived to adulthood averaged 75 years. The present life expectancy in the UK is 77 years for males and 81 for females, while the United States averages 74 for males and 80 for females. Studies have shown that black American males have the shortest lifespans of any group of people in the US, averaging only 69 years (Asian-American females average the longest). This reflects overall poorer health and greater prevalence of heart disease, obesity, diabetes, and cancer among black American men. Women normally outlive men. Theories for this include smaller bodies that place lesser strain on the heart (women have lower rates of cardiovascular disease) and a reduced tendency to engage in physically dangerous activities. Conversely, women are more likely to participate in health-promoting activities. The X chromosome also contains more genes related to the immune system, and women tend to mount a stronger immune response to pathogens than men. However, the idea that men have weaker immune systems due to the supposed immuno-suppressive actions of testosterone is unfounded. There is debate as to whether the pursuit of longevity is a worthwhile health care goal. Bioethicist Ezekiel Emanuel, who is also one of the architects of ObamaCare, has argued that the pursuit of longevity via the compression of morbidity explanation is a "fantasy" and that longevity past age 75 should not be considered an end in itself. This has been challenged by neurosurgeon Miguel Faria, who states that life can be worthwhile in healthy old age, that the compression of morbidity is a real phenomenon, and that longevity should be pursued in association with quality of life. Faria has discussed how longevity in association with leading healthy lifestyles can lead to the postponement of senescence as well as happiness and wisdom in old age. Naturally limited longevity Most biological organisms have a naturally limited longevity due to aging, unlike a rare few that are considered biologically immortal. Given that different species of animals and plants have different potentials for longevity, the disrepair accumulation theory of aging tries to explain how the potential for longevity of an organism is sometimes positively correlated to its structural complexity. 
It suggests that while biological complexity increases individual lifespan, it is counteracted in nature since the survivability of the overall species may be hindered when it results in a prolonged development process, which is an evolutionarily vulnerable state. According to the antagonistic pleiotropy hypothesis, one of the reasons biological immortality is so rare is that certain categories of gene expression that are beneficial in youth become deleterious at an older age. Myths and claims Longevity myths are traditions about long-lived people (generally supercentenarians), either as individuals or groups of people, and practices that have been believed to confer longevity, but for which scientific evidence does not support the ages claimed or the reasons for the claims. A comparison and contrast of "longevity in antiquity" (such as the Sumerian King List, the genealogies of Genesis, and the Persian Shahnameh) with "longevity in historical times" (common-era cases through twentieth-century news reports) is elaborated in detail in Lucian Boia's 2004 book Forever Young: A Cultural History of Longevity from Antiquity to the Present and other sources. After the death of Juan Ponce de León, Gonzalo Fernández de Oviedo y Valdés wrote in Historia General y Natural de las Indias (1535) that Ponce de León was looking for the waters of Bimini to cure his aging. Traditions that have been believed to confer greater human longevity also include alchemy, such as that attributed to Nicolas Flamel. In the modern era, the Okinawa diet has some reputation of linkage to exceptionally high ages. Longevity claims may be subcategorized into four groups: "In late life, very old people often tend to advance their ages at the rate of about 17 years per decade .... Several celebrated super-centenarians (over 110 years) are believed to have been double lives (father and son, relations with the same names or successive bearers of a title) .... A number of instances have been commercially sponsored, while a fourth category of recent claims are those made for political ends ...." The estimate of 17 years per decade was corroborated by the 1901 and 1911 British censuses. Time magazine considered that, by the Soviet Union, longevity had been elevated to a state-supported "Methuselah cult". Robert Ripley regularly reported supercentenarian claims in Ripley's Believe It or Not!, usually citing his own reputation as a fact-checker to claim reliability. Non-human biological longevity Longevity in other animals can shed light on the determinants of life expectancy in humans, especially when found in related mammals. However, important contributions to longevity research have been made by research in other species, ranging from yeast to flies to worms. In fact, some closely related species of vertebrates can have dramatically different life expectancies, demonstrating that relatively small genetic changes can have a dramatic impact on aging. For instance, Pacific Ocean rockfishes have widely varying lifespans. The species Sebastes minor lives a mere 11 years while its cousin Sebastes aleutianus can live for more than 2 centuries. Similarly, a chameleon, Furcifer labordi, is the current record holder for shortest lifespan among tetrapods, with only 4–5 months to live. By contrast, some of its relatives, such as Furcifer pardalis, have been found to live up to 6 years. There are studies about aging-related characteristics of and aging in long-lived animals like various turtles and plants like Ginkgo biloba trees. 
They have identified potentially causal protective traits and suggest many of the species have "slow or [times of] negligible senescence" (or aging). The jellyfish T. dohrnii is biologically immortal and has been studied by comparative genomics. Honey bees (Apis mellifera) are eusocial insects that display dramatic caste-specific differences in longevity. Queen bees live for an average of 1-2 years, compared to workers who live on average 15-38 days in summer and 150-200 days in winter. Worker honey bees with high amounts of flight experience exhibit increased DNA damage in flight muscle, as measured by elevated 8-Oxo-2'-deoxyguanosine, compared to bees with less flight experience. This increased DNA damage is likely due to an imbalance of pro- and anti-oxidants during flight-associated oxidative stress. Flight induced oxidative DNA damage appears to hasten senescence and reduce longevity in A. mellifera. Examples of long-lived plants and animals Currently living Methuselah: over 4,850-year-old bristlecone pine in the White Mountains of California, the oldest currently living non-clonal tree. Dead WPN-114, "Prometheus": approximately 4,900 year-old (at time of tree-death) Pinus longaeva, located in Wheeler Peak, Nevada. The quahog clam (Arctica islandica) is exceptionally long-lived, with a maximum recorded age of 507 years, the longest of any animal. Other clams of the species have been recorded as living up to 374 years. Lamellibrachia luymesi, a deep-sea cold-seep tubeworm, is estimated to reach ages of over 250 years based on a model of its growth rates. A bowhead whale killed in a hunt was found to be approximately 211 years old (possibly up to 245 years old), the longest-lived mammal known. Possibly 250-million year-old bacteria, Bacillus permians, were revived from stasis after being found in sodium chloride crystals in a cavern in New Mexico. Artificial animal longevity extension Gene editing via CRISPR-Cas9 and other methods have significantly altered lifespans in animals. See also Actuarial science Aging Blue zone Centenarian Genetics of aging Life extension Longevity claims Longevity myths Longevity quotient Maximum life span Senescence Notes References Citations Sources External links Global Agewatch's country report cards have the most up-to-date, internationally comparable statistics on population ageing and life expectancy from 195 countries. Duration Population Senescence Health promotion Gerontology
Longevity
Physics,Chemistry,Biology
3,401
71,308,572
https://en.wikipedia.org/wiki/Britzelmayria%20multipedata
Britzelmayria multipedata is a species of mushroom-producing fungus in the family Psathyrellaceae. It is commonly known as the clustered brittlestem. Taxonomy It was first described in 1905 by the American mycologist Charles Horton Peck, who classified it as Psathyra multipedata. It was reclassified as Psathyrella multipedata in 1941 by the American mycologist Alexander H. Smith and remained known as such until recently. In 2020 the German mycologists Dieter Wächter and Andreas Melzer reclassified many species in the family Psathyrellaceae based on phylogenetic analysis and placed this species in the newly created genus Britzelmayria. Many mushroom field guides and websites still refer to this species as Psathyrella multipedata. Description Britzelmayria multipedata is a small brittlestem mushroom with white flesh and a brown cap which is known for growing in dense clusters. Cap: 1-3 cm. Starts conical before flattening out into a convex cap, which may become campanulate (bell shaped) with age. The smooth, brown cap becomes paler when dry. Gills: Adnate or adnexed. Crowded. Light grey or brown with white fringes, maturing to dark brown. Stem: 7-12 cm in height with a thickness of 3-6 mm, tapering slightly towards the cap. It often grows in a wavy fashion with the base fused together with other members of the cluster. Spore print: Dark purplish brown. Spores: Ellipsoid and smooth with a germ pore. 6.5-10 x 3.5-4 μm. Taste: Indistinct and mild. Smell: Faint and mushroomy. Habitat and distribution Britzelmayria multipedata is found on soil amongst grass and in open grassy spaces amongst woodland. It is saprotrophic and grows on buried fallen trees through late summer to autumn. This species is widespread but found only occasionally. Observations of this species appear most common in the UK, western Europe and the East Coast of the United States. References multipedata Fungus species
Britzelmayria multipedata
Biology
430
13,143
https://en.wikipedia.org/wiki/Generalized%20mean
In mathematics, generalized means (or power mean or Hölder mean from Otto Hölder) are a family of functions for aggregating sets of numbers. These include as special cases the Pythagorean means (arithmetic, geometric, and harmonic means). Definition If is a non-zero real number, and are positive real numbers, then the generalized mean or power mean with exponent of these positive real numbers is (See -norm). For we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below): Furthermore, for a sequence of positive weights we define the weighted power mean as and when , it is equal to the weighted geometric mean: The unweighted means correspond to setting all . Special cases A few particular values of yield special cases with their own names: minimum harmonic mean geometric mean arithmetic mean root mean squareor quadratic mean cubic mean maximum Properties Let be a sequence of positive real numbers, then the following properties hold: . , where is a permutation operator. . . Generalized mean inequality In general, if , then and the two means are equal if and only if . The inequality is true for real values of and , as well as positive and negative infinity values. It follows from the fact that, for all real , which can be proved using Jensen's inequality. In particular, for in , the generalized mean inequality implies the Pythagorean means inequality as well as the inequality of arithmetic and geometric means. Proof of the weighted inequality We will prove the weighted power mean inequality. For the purpose of the proof we will assume the following without loss of generality: The proof for unweighted power means can be easily obtained by substituting . Equivalence of inequalities between means of opposite signs Suppose an average between power means with exponents and holds: applying this, then: We raise both sides to the power of −1 (strictly decreasing function in positive reals): We get the inequality for means with exponents and , and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs. Geometric mean For any and non-negative weights summing to 1, the following inequality holds: The proof follows from Jensen's inequality, making use of the fact the logarithm is concave: By applying the exponential function to both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get Taking -th powers of the yields Thus, we are done for the inequality with positive ; the case for negatives is identical but for the swapped signs in the last step: Of course, taking each side to the power of a negative number swaps the direction of the inequality. Inequality between any two power means We are to prove that for any the following inequality holds: if is negative, and is positive, the inequality is equivalent to the one proved above: The proof for positive and is as follows: Define the following function: . is a power function, so it does have a second derivative: which is strictly positive within the domain of , since , so we know is convex. Using this, and the Jensen's inequality we get: after raising both side to the power of (an increasing function, since is positive) we get the inequality which was to be proven: Using the previously shown equivalence we can prove the inequality for negative and by replacing them with and , respectively. 
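For reference, the power mean definition and the inequality discussed above are conventionally written as follows; this is standard textbook notation and a sketch only, so the symbols need not match the article's original ones.

\[
M_p(x_1,\dots,x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^{p}\right)^{1/p} \quad (p \neq 0),
\qquad
M_0(x_1,\dots,x_n) = \left(\prod_{i=1}^{n} x_i\right)^{1/n},
\]
\[
M_p^{w}(x_1,\dots,x_n) = \left(\sum_{i=1}^{n} w_i x_i^{p}\right)^{1/p},
\qquad
M_0^{w}(x_1,\dots,x_n) = \prod_{i=1}^{n} x_i^{w_i},
\qquad \sum_{i=1}^{n} w_i = 1,\; w_i > 0,
\]
\[
M_{-\infty} = \min_i x_i,\quad
M_{-1} = \text{harmonic mean},\quad
M_{1} = \text{arithmetic mean},\quad
M_{2} = \text{quadratic mean},\quad
M_{3} = \text{cubic mean},\quad
M_{+\infty} = \max_i x_i,
\]
\[
p < q \;\Longrightarrow\; M_p(x_1,\dots,x_n) \le M_q(x_1,\dots,x_n),
\quad\text{with equality if and only if } x_1 = \dots = x_n.
\]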
Generalized f-mean The power mean could be generalized further to the generalized -mean: This covers the geometric mean without using a limit with . The power mean is obtained for . Properties of these means are studied in de Carvalho (2016). Applications Signal processing A power mean serves a non-linear moving average which is shifted towards small signal values for small and emphasizes big signal values for big . Given an efficient implementation of a moving arithmetic mean called smooth one can implement a moving power mean according to the following Haskell code. powerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a] powerSmooth smooth p = map (** recip p) . smooth . map (**p) For big it can serve as an envelope detector on a rectified signal. For small it can serve as a baseline detector on a mass spectrum. See also Arithmetic–geometric mean Average Heronian mean Inequality of arithmetic and geometric means Lehmer mean – also a mean related to powers Minkowski distance Quasi-arithmetic mean – another name for the generalized f-mean mentioned above Root mean square Notes References Further reading External links Power mean at MathWorld Examples of Generalized Mean A proof of the Generalized Mean on PlanetMath Means Inequalities Articles with example Haskell code
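The powerSmooth function in the signal-processing example above presupposes some moving arithmetic mean called smooth; a minimal, self-contained way to try it out is sketched below, where movingMean is an assumed naive windowed average rather than anything specified by the article.

import Data.List (tails)

-- A naive moving arithmetic mean over windows of n samples (an assumed helper;
-- any efficient implementation of a moving average would serve the same role).
movingMean :: Floating a => Int -> [a] -> [a]
movingMean n xs = [ sum w / fromIntegral n | w <- map (take n) (tails xs), length w == n ]

-- The moving power mean exactly as given above.
powerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
powerSmooth smooth p = map (** recip p) . smooth . map (**p)

main :: IO ()
main = print (powerSmooth (movingMean 3) 2 [1, 2, 3, 4, 5])  -- a moving quadratic mean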
Generalized mean
Physics,Mathematics
986
63,384,362
https://en.wikipedia.org/wiki/Postal%20codes%20in%20Oceania
Postcodes used in Oceania vary between the various sovereign nations, territories, and associated states in the region. Many of the smaller island regions in Oceania use postal code systems that are integrated into the postal systems of larger countries they are territories or associates of. American states, territories, and states of association In addition to the U.S. State of Hawai'i, there are two territories, one commonwealth, and three freely associated states within Oceania that are administered by the United States Postal Service (U.S.P.S.). All of these places use zip codes that start with the prefixes 967, 968, or 969. Standard USPS domestic rates apply to mail between the United States and these places. Within the State of Hawai'i (postal abbreviation HI), zip code prefix 968 is generally reserved for Urban Honolulu, with all other areas prefixed 967 (shared with American Samoa). Within the U.S. Territories, American Samoa (postal abbreviation AS) uses zip code 96799, and Guam (postal abbreviation GU) uses zip codes in the range 96910–96932. Each major island of the Commonwealth of the Northern Mariana Islands (postal abbreviation MP) has its own zip code in the 96950-96952 range. The nominally independent countries governed by the Compact of Free Association with the United States are also fully integrated into the U.S. Postal Service: The Federated States of Micronesia (postal abbreviation FM), the Marshall Islands (postal abbreviation MH), and Palau (postal abbreviation PW). Palau's abbreviation is derived from the word "Pelew", an old spelling for the archipelago. Two ZIP codes are used within Palau, with 96939 applying to the entire city of Ngerulmud, and 96940 referring to all other parts of Palau. Australia Postcodes were introduced in Australia in 1967 by the Postmaster-General's Department and are now managed by Australia Post, and are published in booklets available from post offices or online from the Australia Post website. Postcodes in Australia have four digits and are placed at the end of the Australian address. French overseas territories There are three French Overseas Départements or Territories in Oceania that are integrated into the postal code system of France: French Polynesia, New Caledonia, and Wallis and Futuna. Like Overseas Départements and Territories around the world, the French postal service uses 3-digit codes to refer to these places: 987 for French Polynesia, 988 for New Caledonia, and 986 for Wallis and Futuna. New Zealand Pitcairn Islands The Pitcairn Islands is integrated into the postal code system of the United Kingdom. References Postal systems
Postal codes in Oceania
Technology
567
67,579,646
https://en.wikipedia.org/wiki/Time%20in%20Slovenia
In Slovenia, the standard time is Central European Time (; CET; UTC+01:00). Daylight saving time is observed from the last Sunday in March (02:00 CET) to the last Sunday in October (03:00 CEST). This is shared with several other EU member states. History The Austro-Hungarian Empire adopted CET on 1 October 1891. Slovenia would continue to observe CET after independence, and observed daylight saving time between 1941 and 1946, and again since 1983. Notation Slovenia uses both the 12-hour and 24-hour clock. IANA time zone database In the IANA time zone database, Slovenia is given one zone in the file zone.tab – Europe/Ljubljana. Data for Slovenia directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself: See also Time in Europe List of time zones by country List of time zones by UTC offset References External links Time in Slovenia at TimeAndDate.com. Geography of Slovenia
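The transition rule quoted above (last Sunday in March to last Sunday in October) is straightforward to compute; the sketch below uses Zeller's congruence to find those Sundays for a given year, and the 2024 dates in the final comment are only a worked example.

-- Zeller's congruence: returns 0 for Saturday, 1 for Sunday, 2 for Monday, ...
dayOfWeek :: Int -> Int -> Int -> Int
dayOfWeek y m d
  | m < 3     = dayOfWeek (y - 1) (m + 12) d
  | otherwise = (d + (13 * (m + 1)) `div` 5 + k + k `div` 4 + j `div` 4 + 5 * j) `mod` 7
  where k = y `mod` 100
        j = y `div` 100

-- Last Sunday of a 31-day month; March and October, the EU transition months, both have 31 days.
lastSunday :: Int -> Int -> Int
lastSunday y m = head [ d | d <- [31,30..25], dayOfWeek y m d == 1 ]

main :: IO ()
main = print (lastSunday 2024 3, lastSunday 2024 10)  -- (31,27): CEST runs 31 March to 27 October in 2024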
Time in Slovenia
Physics
211
56,956,044
https://en.wikipedia.org/wiki/Petrolex
Petrolex Oil & Gas Limited is a Nigerian company and part of Petrolex Group, an African integrated energy conglomerate. The company was founded in February 2007 by Segun Adebutu, a Nigerian entrepreneur. It provides services to the oil and gas industry. It is mainly involved in the refining, storage, distribution and retail of petroleum products in Nigeria and Africa. Petrolex is best known for starting, in December 2017, the construction of a $3.6 billion high-capacity refinery and Sub-Saharan Africa’s largest tank farm as part of its Mega Oil City project in Ogun State, Nigeria. Background Petrolex CEO Adebutu started an oil and fuel trading business around 2005 but showed interest in “mid-stream infrastructure”, into which the company would later put some $330 million. His experience in the family business laid the foundation for new ideas in his business career. Over the years, Adebutu was involved in bold projects including oil and gas, solid minerals, construction and maritime. This background inspired Adebutu to replicate similar practices with his new initiative, Petrolex Oil & Gas Ltd. In December 2017, Petrolex announced its plan to build a $3.6 billion refinery plant with an output capacity of 250,000 barrels a day. The company is currently working on the “front-end engineering design” and plans to complete construction in 2021. This initiative is part of a larger government program to end petroleum product imports in two years. With support from partners, Petrolex Group has invested over $330 million in the Ibefun tank farm, which has a 600,000 million litres monthly capacity. The farm was commissioned by the Vice President of Nigeria, Yemi Osinbajo, as part of phase one of a 10-year expansion program. This phase would ease the Apapa and Ibafon tanker traffic gridlock, a source of anxiety for stakeholders. Petrolex Mega Oil City project Petrolex provides services in refining, storage, distribution and retailing of petroleum products. The company intends to be listed on the Nigerian Stock Exchange in the coming decade. The company launched the planning, design and development of the Petrolex Mega Oil City in Ibefun, Ogun State in 2012. The complex spreads over 101 square kilometres, about 10 per cent the size of Lagos State. It houses a residential estate for staff, an army barracks, 30 loading gantries for product disbursement, and a 4,000-truck-capacity trailer park with accommodation for drivers. The Oil City project is the original idea of Segun Adebutu, CEO of Petrolex and son of the Nigerian entrepreneur Sir Kesington Adebutu. Its goal is to create the largest petrochemical industrial estate in Sub-Saharan Africa. Upon completion, this estate will include a large-capacity refinery, a tank farm, a liquefied petroleum gas processing plant, a lubricant facility and raw material industries (e.g. fertiliser plants). The company has also negotiated the addition of 12,000 acres to expand the Oil City. Operations overview Downstream operations Petrolex downstream operations include the processing of petroleum products; the supply and distribution of gas oil and kerosene; and the retail marketing of specific oil products. Petrolex has built a storage-tank farm and other “mid-stream infrastructure” for $330 million. The company is connecting its infrastructure to the Nigeria System 2B pipeline at Mosimi to support the supply and distribution of petroleum products around the country. This infrastructure includes the procurement of barges, tug boats and a daughter vessel. 
References External links Petrolex official website Petroleum industry Oil and gas companies of Nigeria Companies based in Lagos 2007 establishments in Nigeria Energy companies established in 2007 Non-renewable resource companies established in 2007
Petrolex
Chemistry
749
404,839
https://en.wikipedia.org/wiki/Logical%20block%20addressing
Logical block addressing (LBA) is a common scheme used for specifying the location of blocks of data stored on computer storage devices, generally secondary storage systems such as hard disk drives. LBA is a particularly simple linear addressing scheme; blocks are located by an integer index, with the first block being LBA 0, the second LBA 1, and so on. The IDE standard included 22-bit LBA as an option, which was further extended to 28-bit with the release of ATA-1 (1994) and to 48-bit with the release of ATA-6 (2003), whereas the size of entries in on-disk and in-memory data structures holding the address is typically 32 or 64 bits. Most hard disk drives released after 1996 implement logical block addressing. Overview In logical block addressing, only one number is used to address data, and each linear base address describes a single block. The LBA scheme replaces earlier schemes which exposed the physical details of the storage device to the software of the operating system. Chief among these was the cylinder-head-sector (CHS) scheme, where blocks were addressed by means of a tuple which defined the cylinder, head, and sector at which they appeared on the hard disk. CHS did not map well to devices other than hard disks (such as tapes and networked storage), and was generally not used for them. CHS was used in early MFM and RLL drives, and both it and its successor, extended cylinder-head-sector (ECHS), were used in the first ATA drives. However, current disk drives use zone bit recording, where the number of sectors per track depends on the track number. Even though the disk drive will report some CHS values as sectors per track (SPT) and heads per cylinder (HPC), they have little to do with the disk drive's true geometry. LBA was first introduced in 1981 by SASI, the precursor of SCSI, as an abstraction. While the drive controller still addresses data blocks by their CHS address, this information is generally not used by the SCSI device driver, the OS, filesystem code, or any applications (such as databases) that access the "raw" disk. System calls requiring block-level I/O pass LBA definitions to the storage device driver; for simple cases (where one volume maps to one physical drive), this LBA is then passed directly to the drive controller. In redundant array of independent disks (RAID) devices and storage area networks (SANs) and where logical drives (logical unit numbers, LUNs) are composed via LUN virtualization and aggregation, LBA addressing of individual disk should be translated by a software layer to provide uniform LBA addressing for the entire storage device. Enhanced BIOS The earlier IDE standard from Western Digital introduced 22-bit LBA; in 1994, the ATA-1 standard allowed for 28 bit addresses in both LBA and CHS modes. The CHS scheme used 16 bits for cylinder, 4 bits for head and 8 bits for sector, counting sectors from 1 to 255. This means the reported number of heads never exceeds 16 (0–15), the number of sectors can be 255 (1–255; though 63 is often the largest used) and the number of cylinders can be as large as 65,536 (0–65535), limiting disk size to 128 GiB (≈137.4 GB), assuming 512 byte sectors. These values can be accessed by issuing the ATA command "Identify Device" (ECh) to the drive. However, the IBM BIOS implementation defined in the INT 13h disk access routines used quite a different 24-bit scheme for CHS addressing, with 10 bits for cylinder, 8 bits for head, and 6 bits for sector, or 1024 cylinders, 256 heads, and 63 sectors. 
This INT 13h implementation had pre-dated the ATA standard, as it was introduced when the IBM PC had only floppy disk storage, and when hard disk drives were introduced on the IBM PC/XT, the INT 13h interface could not practically be redesigned due to backward compatibility issues. Overlapping ATA CHS mapping with BIOS CHS mapping produced the lowest common denominator of 10:4:6 bits, or 1024 cylinders, 16 heads, and 63 sectors, which gave the practical limit of 1024×16×63 sectors and 528MB (504 MiB), assuming 512 byte sectors. In order for the BIOS to overcome this limit and successfully work with larger hard drives, a CHS translation scheme had to be implemented in the BIOS disk I/O routines which would convert between 24-bit CHS used by INT 13h and 28-bit CHS numbering used by ATA. The translation scheme was called large or bit shift translation. This method would remap 16:4:8 bit ATA cylinders and heads to the 10:8:6 bit scheme used by INT 13h, generating many more "virtual" drive heads than the physical disk reported. This increased the practical limit to 1024×256×63 sectors, or 8.4GB (7.8 GiB). To further overcome this limit, INT 13h Extensions were introduced with the BIOS Enhanced Disk Drive Services, which removed practical limits on disk size for operating systems which are aware of this new interface, such as the DOS 7.0 component in Windows 95. This enhanced BIOS subsystem supports LBA addressing with the LBA or LBA-assisted method, which uses native 28-bit LBA for addressing ATA disks and performs CHS conversion as needed. The normal or none method reverts to the earlier 10:4:6 bit CHS mode which does not support addressing more than 528MB. Until the release of the ATA-2 standard in 1996, there were a handful of large hard drives which did not support LBA addressing, so only the large or normal methods could be used. However, using the large method also introduced portability problems, as different BIOSes often used different and incompatible translation methods, and hard drives partitioned on a computer with a BIOS from a particular vendor often could not be read on a computer with a different make of BIOS. The solution was to use conversion software such as OnTrack Disk Manager, Micro House EZ-Drive/EZ-BIOS, etc., which installed into the disk's OS loader and replaced INT 13h routines at boot time with custom code. This software could also enable LBA and INT 13h Extensions support for older computers with non LBA-compliant BIOSes. LBA-assisted translation When the BIOS is configured to use a disk in LBA-assisted translation mode, the BIOS accesses the hardware using LBA mode, but also presents a translated CHS geometry via the INT 13h interface. The number of cylinders, heads, and sectors in the translated geometry depends on the total size of the disk. LBA48 The current 48-bit LBA scheme was introduced in 2002 with the ATA-6 standard, raising the addressing limit to 2^48 × 512 bytes, which is exactly 128PiB or approximately 144PB. Current PC-compatible computers support INT 13h Extensions, which use 64-bit structures for LBA addressing and should encompass any future extension of LBA addressing, though modern operating systems implement direct disk access and do not use the BIOS subsystems, except at boot load time. However, the common DOS style Master Boot Record (MBR) partition table only supports disk partitions up to 2TiB in size. 
For larger partitions this needs to be replaced by another scheme, for instance the GUID Partition Table (GPT) which has the same 64-bit limit as the current INT 13h Extensions. Windows XP SP2 is known to support LBA48 (and enabled by default). CHS conversion In the LBA addressing scheme, sectors are numbered as integer indexes; when mapped to CHS (cylinder-head-sector) tuples, LBA numbering starts with the first cylinder, first head, and track's first sector. Once the track is exhausted, numbering continues to the second head, while staying inside the first cylinder. Once all heads inside the first cylinder are exhausted, numbering continues from the second cylinder, etc. Thus, the lower the LBA value is, the closer the physical sector is to the hard drive's first (that is, outermost) cylinder. CHS tuples can be mapped to LBA address with the following formula: LBA = (C × HPC + H) × SPT + (S − 1) where C, H and S are the cylinder number, the head number, and the sector number LBA is the logical block address HPC is the maximum number of heads per cylinder (reported by disk drive, typically 16 for 28-bit LBA) SPT is the maximum number of sectors per track (reported by disk drive, typically 63 for 28-bit LBA) LBA addresses can be mapped to CHS tuples with the following formula ("mod" is the modulo operation, i.e. the remainder, and "÷" is integer division, i.e. the quotient of the division where any fractional part is discarded): C = LBA ÷ (HPC × SPT) H = (LBA ÷ SPT) mod HPC S = (LBA mod SPT) + 1 According to the ATA specifications, "If the content of words (61:60) is greater than or equal to 16,514,064, then the content of word 1 [the number of logical cylinders] shall be equal to 16,383." Therefore, for LBA 16450559, an ATA drive may actually respond with the CHS tuple (16319, 15, 63), and the number of cylinders in this scheme must be much larger than 1024 allowed by INT 13h. Operating system dependencies Operating systems that are sensitive to BIOS-reported drive geometry include Solaris, DOS and Windows NT family, where NTLDR (NT, 2000, XP, Server 2003) or BOOTMGR (Vista, Server 2008, Windows 7 and Server 2008 R2) use Master boot record which addresses the disk using CHS; x86-64 and Itanium versions of Windows can partition the drive with GUID Partition Table which uses LBA addressing. Some operating systems do not require any translation because they do not use geometry reported by BIOS in their boot loaders. Among these operating systems are BSD, Linux, macOS, OS/2 and ReactOS. See also Block (data storage) Cylinder-head-sector (CHS) Disk formatting Disk partitioning Disk storage Notes References External links LBAs explained LBA and CHS format, LBA mapping CHS to LBA Translation Tutorial Microsoft article on 7.8 GB limit on NT 4.0 Hard Drive Size Limitations and Barriers Upgrading and Repairing PC's, by Scott Mueller. Pages 524–531. AT Attachment 8 - ATA/ATAPI Command Set (ATA8-ACS) Computer storage devices SCSI AT Attachment BIOS
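The size limits and the conversion formulas quoted above translate directly into code; the following sketch assumes 512-byte sectors and the typical HPC = 16 and SPT = 63 mentioned in the text, and reproduces the worked ATA example.

-- Capacity in bytes of a CHS-addressable space, assuming 512-byte sectors.
capacity :: Integer -> Integer -> Integer -> Integer
capacity cylinders heads sectors = cylinders * heads * sectors * 512

-- CHS -> LBA, as given above: LBA = (C * HPC + H) * SPT + (S - 1).
chsToLba :: Integer -> Integer -> (Integer, Integer, Integer) -> Integer
chsToLba hpc spt (c, h, s) = (c * hpc + h) * spt + (s - 1)

-- LBA -> CHS, the inverse mapping given above.
lbaToChs :: Integer -> Integer -> Integer -> (Integer, Integer, Integer)
lbaToChs hpc spt lba =
  ( lba `div` (hpc * spt), (lba `div` spt) `mod` hpc, (lba `mod` spt) + 1 )

main :: IO ()
main = do
  print (capacity 1024 16 63)             -- 528482304 bytes: the ~528 MB BIOS/ATA limit
  print (capacity 1024 256 63)            -- 8455716864 bytes: the ~8.4 GB "large" translation limit
  print (2 ^ 28 * 512 :: Integer)         -- 137438953472 bytes: the 128 GiB 28-bit LBA limit
  print (2 ^ 48 * 512 :: Integer)         -- 144115188075855872 bytes: the 128 PiB LBA48 limit
  print (lbaToChs 16 63 16450559)         -- (16319,15,63), the tuple from the ATA example above
  print (chsToLba 16 63 (16319, 15, 63))  -- 16450559, round-tripping back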
Logical block addressing
Technology
2,314
162,263
https://en.wikipedia.org/wiki/Telephone%20directory
A telephone directory, commonly called a telephone book, telephone address book, phonebook, or the white and yellow pages, is a listing of telephone subscribers in a geographical area or subscribers to services provided by the organization that publishes the directory. Its purpose is to allow the telephone number of a subscriber identified by name and address to be found. The advent of the Internet, search engines, and smartphones in the 21st century greatly reduced the need for a paper phone book. Some communities, such as Seattle and San Francisco, sought to ban their unsolicited distribution as wasteful, unwanted and harmful to the environment. The slogan "Let Your Fingers Do the Walking" refers to the use of phone books. Content Subscriber names are generally listed in alphabetical order, together with their postal or street address and telephone number. In principle every subscriber in the geographical coverage area is listed, but subscribers may request the exclusion of their number from the directory, often for a fee; their number is then said to be "unlisted" (US and Canada), "ex-directory" (British English), or "private" (Australia and New Zealand). A telephone directory may also provide instructions: how to use the telephone service, how to dial a particular number, be it local or international, and which numbers to call for important and emergency services, utilities, hospitals, doctors, and organizations that can provide support in times of crisis. It may also have civil defense, emergency management, or first aid information. There may be transit maps, postal code/zip code guides, international dialing codes or stadium seating charts, as well as advertising. In the US, under current rules and practices, mobile phone and voice over IP listings are not included in telephone directories. Efforts to create cellular directories have met stiff opposition from several fronts, including those who seek to avoid telemarketers. Types A telephone directory and its content may be known by the colour of the paper it is printed on. White pages generally indicates personal or alphabetic listings. Yellow pages, golden pages, A2Z, or classified directory is usually a "business directory", where businesses are listed alphabetically within each of many classifications (e.g., "lawyers"), almost always with paid advertising. Grey pages, sometimes called a "reverse telephone directory", allow subscriber details to be found for a given number; they are not available in all jurisdictions. (These listings are often published separately, in a city directory, or under another name, for a price, and made available to commercial and government agencies.) Other colors may have other meanings; for example, information on government agencies is often printed on blue pages or green pages. Publication Telephone directories can be published in hard copy or in electronic form. In the latter case, the directory can be on physical media such as CD-ROM, or accessed through an online service via proprietary terminals or over the Internet. In many countries, directories are both published in book form and also available over the Internet. Printed directories were usually supplied free of charge. CD ROM SelectPhone (ProCD Inc.) and PhoneDisc (Digital Directory Assistance Inc.) were among the earliest such products. These were not a matter of a single click: PhoneDisc, depending on the mix of Residential, Business or both, involved up to eight CD-ROMs, while SelectPhone required five. 
Both provide a reverse lookup feature (by phone number or by address), albeit involving up to five CD-ROMs. Internet The combination of phone number lookups, along with Internet access, was offered by some service providers; VoIP (Voice over IP) was an additional feature. History Telephone directories are a type of city directory. Books listing the inhabitants of an entire city were widely published starting in the 18th century, before the invention of the telephone. The first telephone directory, consisting of a single piece of cardboard, was issued on 21 February 1878; it listed 50 individuals, businesses, and other offices in New Haven, Connecticut, that had telephones. The directory was not alphabetized and no numbers were included with the people listed in it. In 1879, Dr. Moses Greeley Parker suggested the format of the telephone directory be changed so that subscribers appeared in alphabetical order and each telephone be identified with a number. Parker came to this idea out of fear that Lowell, Massachusetts's four operators would contract measles and be unable to connect telephone subscribers to one another. The first British telephone directory was published on 15 January 1880 by The Telephone Company. It contained 248 names and addresses of individuals and businesses in London; telephone numbers were not used at the time as subscribers were asked for by name at the exchange. The directory is preserved as part of the British phone book collection by BT Archives. The Reuben H. Donnelly company asserts that it published the first classified directory, or yellow pages, for Chicago, Illinois, in 1886. In 1938, AT&T commissioned the creation of a new typeface, known as Bell Gothic, the purpose of which was to be readable at very small font sizes when printed on newsprint where small imperfections were common. In 1981, France became the first country to have an electronic directory on a system called Minitel. The directory is called "11" after its telephone access number. In 1991, the U.S. Supreme Court ruled (in Feist v. Rural) that telephone companies do not have a copyright on telephone listings, because copyright protects creativity and not the mere labor of collecting existing information. In late July 1995 Kapitol launched the Infobel.be website. Infobel was then the first telephone directory website launched on the then-nascent Internet. In 1996, in the US the first telephone directories went online. Yellowpages.com and Whitepages.com both saw their start in April. In 1999, the first online telephone directories and people-finding sites such as LookupUK.com went online in the UK. In 2003, more advanced UK searching including Electoral Roll became available on LocateFirst.com. With online directories, and with many people giving up landlines for cell phones whose numbers are not listed in telephone directories, printed directories are no longer as necessary as they once were. Regulators no longer required that residential listings be printed, starting with New York in 2010. Yellow pages continued to be printed because some advertisers still reached consumers that way. In the 21st century, printed telephone directories are increasingly criticized as waste. In 2012, after some North American cities passed laws banning the distribution of telephone books, an industry group sued and obtained a court ruling permitting the distribution to continue. 
In 2010, manufacture and distribution of telephone directories produced over 1,400,000 metric tons of greenhouse gases and consumed over 600,000 tons of paper annually. Reverse directories A reverse telephone directory is sorted by phone number, so the name and address of a subscriber is looked up by phone number. See also City directory References Further reading External links Phone Book of the World.com 1878 introductions American inventions Directories History of the telephone Telephone numbers
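In data-structure terms, a reverse directory is simply the forward name-to-number mapping inverted; the sketch below uses invented names and numbers purely for illustration, and assumes each number belongs to a single subscriber.

import qualified Data.Map as Map

-- Forward directory: subscriber name -> number.  Reverse directory: number -> name.
reverseDirectory :: Map.Map String String -> Map.Map String String
reverseDirectory = Map.fromList . map (\(name, number) -> (number, name)) . Map.toList

main :: IO ()
main = do
  let forward = Map.fromList [("Alice Example", "555-0100"), ("Bob Example", "555-0199")]
  print (Map.lookup "555-0199" (reverseDirectory forward))  -- Just "Bob Example"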
Telephone directory
Mathematics
1,458
16,851,994
https://en.wikipedia.org/wiki/PRRX2
Paired mesoderm homeobox protein 2 is a protein that in humans is encoded by the PRRX2 gene. Function The DNA-associated protein encoded by this gene is a member of the paired family of homeobox proteins. Expression is localized to proliferating fetal fibroblasts and the developing dermal layer, with downregulated expression in adult skin. Increases in expression of this gene during fetal but not adult wound healing suggest a possible role in mechanisms that control mammalian dermal regeneration and prevent formation of scar response to wounding. The expression patterns provide evidence consistent with a role in fetal skin development and a possible role in cellular proliferation. References Further reading Transcription factors
PRRX2
Chemistry,Biology
140
9,659,931
https://en.wikipedia.org/wiki/Interferon%20type%20III
The type III interferon group is a group of anti-viral cytokines, that consists of four IFN-λ (lambda) molecules called IFN-λ1, IFN-λ2, IFN-λ3 (also known as IL29, IL28A and IL28B respectively), and IFN-λ4. They were discovered in 2003. Their function is similar to that of type I interferons, but is less intense and serves mostly as a first-line defense against viruses in the epithelium. Genomic location Genes encoding this group of interferons are all located on the long arm of chromosome 19 in human, specifically in region between 19q13.12 and 19q13.13. The IFNL1 gene, encoding IL-29, is located downstream of IFNL2, encoding IL-28A. IFNL3, encoding IL28B, is located downstream of IFNL4. In mice, the genes encoding for type III interferons are located on chromosome 7 and the family consists only of IFN-λ2 and IFN-λ3. Structure Interferons All interferon groups belong to class II cytokine family which have a conserved structure that comprises six α-helices. The proteins of type III interferon group are highly homologous and show high amino acid sequence similarity between. The similarity between IFN-λ2 and IFN-λ3 is approximately 96%, similarity of IFNλ1 to IFNλ 2/3 is around 81%. Lowest similarity is found between IFN-λ4 and IFN-λ3 - only around 30%. Unlike type I interferon group, which consist of only one exon, type III interferons consist of multiple exons. Receptor The receptors for these cytokines are also structurally conserved. The receptors have two type III fibronectin domains in their extracellular domain. The interface of these two domains forms the cytokine binding site. The receptor complex for type III interferons consists of two subunits - IL10RB (also called IL10R2 or CRF2-4) and IFNLR1 (formerly called IL28RA, CRF2-12). In contrast to the ubiquitous expression of receptors for type I interferons, IFNLR1 is largely restricted to tissues of epithelial origin. Despite high homology between type III interferons, the binding affinity to IFNLR1 differ, with IFN-λ1 showing the highest binding affinity, and IFN-λ3 showing the lowest binding affinity. Signalling pathway IFN-λ production is induced by pathogen sensing through pattern recognition receptors (PRR), including TLR, Ku70 and RIG-I-like. The main producer of IFN-λ are type 2 myeloid dendritic cells. IFN-λ binds to IFNLR1 with a high affinity, which then recruits the low-affinity subunit of the receptor, IL10Rb. This interaction creates a signalling complex. Upon binding of the cytokine to the receptor, JAK-STAT signalling pathway gets activated, specifically JAK1 and TYK2 and phosphorylate and activate STAT-1 and STAT-2, which then induces downstream signalling that leads to induction of expression of hundreds of IFN-stimulated genes (ISG), e.g.: NF-κB, IRF, ISRE, Mx1, OAS1. The signalling is modulated by suppressor of cytokine signalling 1 (SOCS1) and ubiquitin-specific peptidase 18 (USP18). Function Functions of type III interferons overlap largely with that of type I interferons. Both of these cytokine groups modulate the immune response after a pathogen has been sensed in the organism, their functions are mostly anti-viral and anti-proliferative. However, type III interferons tend to be less inflammatory and show a slower kinetics than type I. Also, because of the restricted expression of IFNLR1, the immunomodulatory effect of type III interferons is limited. Because the receptors for type I and type II interferons are expressed on almost all nucleated cells, their function is rather systemic. 
Type III interferon receptors are expressed more specifically on epithelial cells and some immune cells such as neutrophils, and depending on the species, B cells and dendritic cells as well. Therefore, their antiviral effects are most prominent in barriers, in gastrointestinal, respiratory and reproductive tracts. Type III interferons usually act as the first line of defense against viruses at the barriers. In the gastrointestinal tract, both type I and type III interferons are needed to effectively fight reovirus infection. Type III interferons restrict the initial replication of the virus and diminish the shedding of through feces, while type I interferons prevent the systematic infection. On the other hand, in the respiratory tract these two groups of interferons seem to be rather redundant, as documented by the susceptibility of double-deficient mice (in receptors for type I and type III interferons), but the resistance to respiratory virus in mice that are deficient in either type I or type III interferon receptors. Additional gastrointestinal viruses such as rotavirus and norovirus, as well as non-gastrointestinal viruses like influenza and West Nile virus, are also restricted by type III interferons. References Cytokines Antiviral drugs
Interferon type III
Chemistry,Biology
1,170
216,650
https://en.wikipedia.org/wiki/Ferrimagnetism
A ferrimagnetic material is a material that has populations of atoms with opposing magnetic moments, as in antiferromagnetism, but these moments are unequal in magnitude, so a spontaneous magnetization remains. This can for example occur when the populations consist of different atoms or ions (such as Fe2+ and Fe3+). Like ferromagnetic substances, ferrimagnetic substances are attracted by magnets and can be magnetized to make permanent magnets. The oldest known magnetic substance, magnetite (Fe3O4), is ferrimagnetic, but was classified as a ferromagnet before Louis Néel discovered ferrimagnetism in 1948. Since the discovery, numerous uses have been found for ferrimagnetic materials, such as hard-drive platters and biomedical applications. History Until the twentieth century, all naturally occurring magnetic substances were called ferromagnets. In 1936, Louis Néel published a paper proposing the existence of a new form of cooperative magnetism he called antiferromagnetism. While working with Mn2Sb, French physicist Charles Guillaud discovered that the current theories on magnetism were not adequate to explain the behavior of the material, and made a model to explain the behavior. In 1948, Néel published a paper about a third type of cooperative magnetism, based on the assumptions in Guillaud's model. He called it ferrimagnetism. In 1970, Néel was awarded for his work in magnetism with the Nobel Prize in Physics. Physical origin Ferrimagnetism has the same physical origins as ferromagnetism and antiferromagnetism. In ferrimagnetic materials the magnetization is also caused by a combination of dipole–dipole interactions and exchange interactions resulting from the Pauli exclusion principle. The main difference is that in ferrimagnetic materials there are different types of atoms in the material's unit cell. An example of this can be seen in the figure above. Here the atoms with a smaller magnetic moment point in the opposite direction of the larger moments. This arrangement is similar to that present in antiferromagnetic materials, but in ferrimagnetic materials the net moment is nonzero because the opposed moments differ in magnitude. Ferrimagnets have a critical temperature above which they become paramagnetic just as ferromagnets do. At this temperature (called the Curie temperature) there is a second-order phase transition, and the system can no longer maintain a spontaneous magnetization. This is because at higher temperatures the thermal motion is strong enough that it exceeds the tendency of the dipoles to align. Derivation There are various ways to describe ferrimagnets, the simplest of which is with mean-field theory. In mean-field theory the field acting on the atoms can be written as where is the applied magnetic field, and is field caused by the interactions between the atoms. The following assumption then is Here is the average magnetization of the lattice, and is the molecular field coefficient. When we allow and to be position- and orientation-dependent, we can then write it in the form where is the field acting on the i-th substructure, and is the molecular field coefficient between the i-th and k-th substructures. For a diatomic lattice we can designate two types of sites, a and b. We can designate the number of magnetic ions per unit volume, the fraction of the magnetic ions on the a sites, and the fraction on the b sites. This then gives It can be shown that and that unless the structures are identical. 
favors a parallel alignment of and , while favors an anti-parallel alignment. For ferrimagnets, , so it will be convenient to take as a positive quantity and write the minus sign explicitly in front of it. For the total fields on a and b this then gives Furthermore, we will introduce the parameters and which give the ratio between the strengths of the interactions. At last we will introduce the reduced magnetizations with the spin of the i-th element. This then gives for the fields: The solutions to these equations (omitted here) are then given by where is the Brillouin function. The simplest case to solve now is . Since , this then gives the following pair of equations: with and . These equations do not have a known analytical solution, so they must be solved numerically to find the temperature dependence of . Effects of temperature Unlike ferromagnetism, the magnetization curves of ferrimagnetism can take many different shapes depending on the strength of the interactions and the relative abundance of atoms. The most notable instances of this property are that the direction of magnetization can reverse while heating a ferrimagnetic material from absolute zero to its critical temperature, and that strength of magnetization can increase while heating a ferrimagnetic material to the critical temperature, both of which cannot occur for ferromagnetic materials. These temperature dependencies have also been experimentally observed in NiFe2/5Cr8/5O4 and Li1/2Fe5/4Ce5/4O4. A temperature lower than the Curie temperature, but at which the opposing magnetic moments are equal (resulting in a net magnetic moment of zero) is called a magnetization compensation point. This compensation point is observed easily in garnets and rare-earth–transition-metal alloys (RE-TM). Furthermore, ferrimagnets may also have an angular momentum compensation point, at which the net angular momentum vanishes. This compensation point is crucial for achieving fast magnetization reversal in magnetic-memory devices. Effect of external fields When ferrimagnets are exposed to an external magnetic field, they display what is called magnetic hysteresis, where magnetic behavior depends on the history of the magnet. They also exhibit a saturation magnetization ; this magnetization is reached when the external field is strong enough to make all the moments align in the same direction. When this point is reached, the magnetization cannot increase, as there are no more moments to align. When the external field is removed, the magnetization of the ferrimagnet does not disappear, but a nonzero magnetization remains. This effect is often used in applications of magnets. If an external field in the opposite direction is applied subsequently, the magnet will demagnetize further until it eventually reaches a magnetization of . This behavior results in what is called a hysteresis loop. Properties and uses Ferrimagnetic materials have high resistivity and have anisotropic properties. The anisotropy is actually induced by an external applied field. When this applied field aligns with the magnetic dipoles, it causes a net magnetic dipole moment and causes the magnetic dipoles to precess at a frequency controlled by the applied field, called Larmor or precession frequency. As a particular example, a microwave signal circularly polarized in the same direction as this precession strongly interacts with the magnetic dipole moments; when it is polarized in the opposite direction, the interaction is very low. 
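As noted above, the coupled mean-field equations for the two sublattices have no known analytical solution and must be solved numerically. The sketch below illustrates that self-consistent (fixed-point) procedure in reduced units; the spin value and the coupling constants are invented for the example and do not come from the article, whose exact equations are not reproduced here.

-- Brillouin function B_J(x), the standard mean-field saturation law.
brillouin :: Double -> Double -> Double
brillouin j x
  | abs x < 1e-12 = 0
  | otherwise     = a * coth (a * x) - b * coth (b * x)
  where a = (2 * j + 1) / (2 * j)
        b = 1 / (2 * j)
        coth y = cosh y / sinh y

-- One self-consistency update for the reduced sublattice magnetizations (sa, sb)
-- at reduced temperature t.  The minus signs encode the antiparallel coupling
-- between the a and b sublattices; j, alpha and beta are illustrative values only.
step :: Double -> (Double, Double) -> (Double, Double)
step t (sa, sb) = ( brillouin j ((alpha * sa - sb) / t)
                  , brillouin j ((beta * sb - sa) / t) )
  where j = 2.5; alpha = 0.4; beta = 0.8

-- Iterate the update from an asymmetric starting guess until it settles.
solve :: Double -> (Double, Double)
solve t = iterate (step t) (1.0, -1.0) !! 2000

main :: IO ()
main = mapM_ (print . solve) [0.2, 0.5, 1.0, 1.5]  -- the sublattice moments and their sum shrink toward zero as t increases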
When the interaction is strong, the microwave signal can pass through the material. This directional property is used in the construction of microwave devices like isolators, circulators, and gyrators. Ferrimagnetic materials are also used to produce optical isolators and circulators. Ferrimagnetic minerals in various rock types are used to study ancient geomagnetic properties of Earth and other planets. That field of study is known as paleomagnetism. In addition, it has been shown that ferrimagnets such as magnetite can be used for thermal energy storage. Examples The oldest known magnetic material, magnetite, is a ferrimagnetic substance. The tetrahedral and octahedral sites of its crystal structure exhibit opposite spin. Other known ferrimagnetic materials include yttrium iron garnet (YIG); cubic ferrites composed of iron oxides with other elements such as aluminum, cobalt, nickel, manganese, and zinc; and hexagonal or spinel type ferrites, including rhenium ferrite, ReFe2O4, PbFe12O19 and BaFe12O19 and pyrrhotite, Fe1−xS. Ferrimagnetism can also occur in single-molecule magnets. A classic example is a dodecanuclear manganese molecule with an effective spin S = 10 derived from antiferromagnetic interaction on Mn(IV) metal centers with Mn(III) and Mn(II) metal centers. See also References External links Magnetic ordering Quantum phases
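The coupled equations in the derivation above have no closed-form solution, so in practice the sublattice magnetizations are obtained numerically. The following Python sketch is illustrative only: the coupling temperatures T_aa, T_ab, T_ba, T_bb and the site fractions are invented example values, not data for any real material. It solves the spin-1/2 pair σ_a = tanh[(1/T)(T_aa σ_a − T_ab σ_b)], σ_b = tanh[(1/T)(−T_ba σ_a + T_bb σ_b)] by damped fixed-point iteration and prints the net reduced magnetization at a few temperatures.

    import numpy as np

    # Illustrative parameters (invented for this sketch, not taken from the article).
    lam, mu = 1.0 / 3.0, 2.0 / 3.0   # fractions of magnetic ions on the a and b sublattices
    T_aa, T_bb = 50.0, 150.0         # intra-sublattice coupling temperatures (K)
    T_ab, T_ba = 300.0, 150.0        # inter-sublattice coupling temperatures (K)

    def sublattice_magnetizations(T, mix=0.5, tol=1e-10, max_iter=100000):
        """Solve the coupled spin-1/2 mean-field equations by damped fixed-point
        iteration, starting from a fully anti-aligned guess."""
        sa, sb = 1.0, -1.0
        for _ in range(max_iter):
            sa_new = np.tanh((T_aa * sa - T_ab * sb) / T)
            sb_new = np.tanh((-T_ba * sa + T_bb * sb) / T)
            if abs(sa_new - sa) < tol and abs(sb_new - sb) < tol:
                sa, sb = sa_new, sb_new
                break
            sa = (1 - mix) * sa + mix * sa_new   # damping keeps the iteration stable
            sb = (1 - mix) * sb + mix * sb_new
        return sa, sb

    for T in (10, 50, 100, 150, 200, 250, 300):
        sa, sb = sublattice_magnetizations(T)
        net = lam * sa + mu * sb   # net reduced magnetization (sigma_b is negative)
        print(f"T = {T:3d} K  sigma_a = {sa:+.3f}  sigma_b = {sb:+.3f}  net = {net:+.3f}")

With other parameter choices the net magnetization computed this way can pass through zero below the Curie temperature, which corresponds to the magnetization compensation point described in the article.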
Ferrimagnetism
Physics,Chemistry,Materials_science,Engineering
1,812
78,240,764
https://en.wikipedia.org/wiki/BMB-101
BMB-101 is a serotonin 5-HT2C receptor agonist which is under development for the treatment of absence epilepsy, Pitt-Hopkins syndrome, Dravet syndrome, binge-eating disorder, Lennox-Gastaut syndrome, and opioid-related disorders. It is taken by mouth. The drug acts as a highly selective biased agonist of the serotonin 5-HT2C receptor. It has greater than 100-fold selectivity for the serotonin 5-HT2C receptor over other serotonin receptors, including the serotonin 5-HT2A and 5-HT2B receptors. BMB-101 shows functional selectivity at the serotonin 5-HT2C receptor for activation of Gq signaling with minimal β-arrestin recruitment. This in turn appears to minimize receptor desensitization and development of tolerance. Due to its much greater selectivity for the serotonin 5-HT2C receptor, BMB-101 is not expected to possess the psychedelic effects or cardiotoxicity that have been associated with existing drugs like fenfluramine and lorcaserin at therapeutic or supratherapeutic doses. In accordance with its mechanism of action, BMB-101 produces anticonvulsant effects in animals. BMB-101 is under development by Bright Minds Biosciences. As of October 2023, it is in phase 2 clinical trials for absence epilepsy and Pitt-Hopkins syndrome, phase 1 clinical trials for Dravet syndrome, and in preclinical research for binge-eating disorder, Lennox-Gastaut syndrome, and opioid-related disorders. The chemical structure of BMB-101 does not yet appear to have been disclosed. See also Bexicaserin Vabicaserin References External links BMB-101 - Bright Minds Biosciences 5-HT2C agonists Anticonvulsants Biased ligands Drugs with undisclosed chemical structures Experimental drugs
BMB-101
Chemistry
424
14,398,725
https://en.wikipedia.org/wiki/Katanosin
Katanosins are a group of antibiotics (also known as lysobactins). They are natural products with strong antibacterial potency. So far, katanosin A and katanosin B (lysobactin) have been described. Sources Katanosins have been isolated from the fermentation broth of microorganisms, such as Cytophaga or the Gram-negative bacterium Lysobacter sp. Structure Katanosins are cyclic depsipeptides (acylcyclodepsipeptides). These structures are not ordinary proteins derived from primary metabolism. Rather, they originate from bacterial secondary metabolism. Accordingly, various non-proteinogenic (non-ribosomal) amino acids are found in katanosins, such as 3-hydroxyleucine, 3-hydroxyasparagine, allothreonine and 3-hydroxyphenylalanine. All katanosins have a cyclic and a linear segment (“lariat structure”). The peptidic ring is closed with an ester bond (lactone). Katanosin A and B differ at amino acid position 7. The minor metabolite katanosin A has a valine in this position, whereas the main metabolite katanosin B carries an isoleucine. Biological activity Katanosin antibiotics target bacterial cell wall biosynthesis. They are highly potent against problematic Gram-positive hospital pathogens such as staphylococci and enterococci. Their promising biological activity has attracted various biological and chemical research groups. Their in-vitro potency is comparable with that of the current “last defence” antibiotic vancomycin. Chemical synthesis The first total syntheses of katanosin B (lysobactin) were described in 2007. References Antibiotics Depsipeptides
Katanosin
Biology
392
5,302,952
https://en.wikipedia.org/wiki/Defective%20matrix
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems. An n × n defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ. If the algebraic multiplicity of λ exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with λ), then λ is said to be a defective eigenvalue. However, every eigenvalue λ with algebraic multiplicity m always has m linearly independent generalized eigenvectors. A real symmetric matrix (and more generally a Hermitian matrix), as well as a unitary matrix, is never defective; more generally, a normal matrix (which includes Hermitian and unitary matrices as special cases) is never defective. Jordan block Any nontrivial Jordan block of size 2 × 2 or larger (that is, not completely diagonal) is defective. (A diagonal matrix is a special case of the Jordan normal form with all trivial Jordan blocks of size 1 × 1 and is not defective.) For example, the n × n Jordan block J, with the eigenvalue λ on the diagonal and 1s on the superdiagonal, has an eigenvalue λ with algebraic multiplicity n (or greater if there are other Jordan blocks with the same eigenvalue), but only one distinct eigenvector e_1 = (1, 0, ..., 0)^T, which satisfies J e_1 = λ e_1. The other canonical basis vectors e_2, e_3, ..., e_n form a chain of generalized eigenvectors such that J e_k = λ e_k + e_{k−1} for k = 2, 3, ..., n. Any defective matrix has a nontrivial Jordan normal form, which is as close as one can come to diagonalization of such a matrix. Example A simple example of a defective matrix is A = [[3, 1], [0, 3]], which has a double eigenvalue of 3 but only one distinct eigenvector (1, 0)^T (and constant multiples thereof). See also Notes References Linear algebra Matrices
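The example above can be checked numerically. The following Python sketch (illustrative only) uses NumPy to confirm that the matrix [[3, 1], [0, 3]] has the eigenvalue 3 with algebraic multiplicity 2 but geometric multiplicity 1, and then recovers a generalized eigenvector.

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [0.0, 3.0]])

    eigenvalues = np.linalg.eigvals(A)
    print("eigenvalues:", eigenvalues)            # both equal to 3 -> algebraic multiplicity 2

    # Geometric multiplicity = dimension of the null space of (A - 3I)
    lam = 3.0
    rank = np.linalg.matrix_rank(A - lam * np.eye(2))
    geometric_multiplicity = 2 - rank
    print("geometric multiplicity:", geometric_multiplicity)   # 1 < 2, so A is defective

    # A generalized eigenvector v2 satisfies (A - 3I) v2 = v1, with eigenvector v1 = (1, 0)^T
    v1 = np.array([1.0, 0.0])
    v2 = np.linalg.lstsq(A - lam * np.eye(2), v1, rcond=None)[0]
    print("generalized eigenvector:", v2)         # (0, 1)^T, up to adding multiples of v1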
Defective matrix
Mathematics
455
45,684,913
https://en.wikipedia.org/wiki/Plant%20intelligence
Plant intelligence is a field of plant biology which aims to understand how plants process the information they obtain from their environment. Plant intelligence has been defined as "any type of intentional and flexible behavior that is beneficial and enables the organism to achieve its goal". Plant neurobiology is a subfield of plant intelligence research that claims plants possess abilities associated with cognition including anticipation, decision making, learning and memory. Terminology used in plant neurobiology is rejected by the majority of plant scientists as misleading as plants do not possess consciousness or neurons. History Early research In 1811, James Perchard Tupper authored An Essay on the Probability of Sensation in Vegetables which argued that plants possess a low form of sensation. He has been cited as an early botanist "attracted to the notion that the ability of plants to feel pain or pleasure demonstrated the universal beneficence of a Creator". The notion that plants are capable of feeling emotions was first recorded in 1848, when Gustav Fechner, an experimental psychologist, suggested that plants are capable of emotions and that one could promote healthy growth with talk, attention, attitude, and affection. Federico Delpino wrote about plant intelligence in 1867. The idea of cognition in plants was explored by Charles Darwin in 1880 in the book The Power of Movement in Plants, co-authored with his son Francis. Using a neurological metaphor, he described the sensitivity of plant roots in proposing that the tip of roots acts like the brain of some lower animals. This involves reacting to sensation in order to determine their next movement. Darwin's "root-brain hypothesis" influenced those in the field of plant neurobiology many years later. John Ellor Taylor in his 1884 book The Sagacity and Morality of Plants argued that plants are conscious agents. Jagadish Chandra Bose invented various devices and instruments to measure electrical responses in plants. According to biologist Patrick Geddes "In his investigations on response in general Bose had found that even ordinary plants and their different organs were sensitive— exhibiting, under mechanical or other stimuli, an electric response, indicative of excitation." One visitor to his laboratory, the vegetarian playwright George Bernard Shaw, was intensely disturbed upon witnessing a demonstration in which a cabbage had "convulsions" as it boiled to death. Jagadish Chandra Bose is considered an important forerunner of plant neurobiology by proponents of plant cognition. Bose was the author of The Nervous Mechanism of Plants, published in 1926. Karl F. Kellerman, Associate Chief of the Bureau of Plant Industry, United States Department of Agriculture criticized Bose's interpretation of the results from his experiments, stating that he failed to prove the conclusions from his reports that plants feel pain. Kellerman commented that "Sir Jagadar passed an electric current through plants, and his instruments recorded a break in the current. Such variations in resistance to electric current are found even when passing a current through dead matter". In 1900, ornithologist Thomas G. Gentry authored Intelligence in Plants and Animals which argued that plants have consciousness. Historian Ed Folsom described it as "an exhaustive investigation of how such animals as bees, ants, worms and buzzards, as well as all kinds of plants, display intelligence and thus have souls". 
Captain Arthur Smith in the early 1900s authored the first article on "plant consciousness". In 1905, Rev. Charles Fletcher Argyll Saxby authored a pamphlet, Do Plants Think? Some speculations concerning a neurology and psychology of plants. Maurice Maeterlinck wrote about the intelligence of flowers in 1907. Royal Dixon in his 1914 book, The Human Side of Plants argued that plants are sentient and have minds and souls. Cleve Backster In the 1960s Cleve Backster, an interrogation specialist with the CIA, conducted research that led him to believe that plants can feel and respond to emotions and intents from other organisms including humans. Backster's interest in the subject began in February 1966 when he tried to measure the rate at which water rises from a philodendron's root into its leaves. Because a polygraph or "lie detector" can measure electrical resistance, which would alter when the plant was watered, he attached a polygraph to one of the plant's leaves. Backster stated that, to his immense surprise, "the tracing began to show a pattern typical of the response you get when you subject a human to emotional stimulation of short duration". His ideas about primary perception (plants responding to emotions and intents) became known as the "Backster effect". In 1975, K. A. Horowitz, D. C. Lewis and E. L. Gasteiger published an article in Science giving their results when repeating one of Backster's effects: plant response to the killing of brine shrimp in boiling water. The researchers grounded the plants to reduce electrical interference and rinsed them to remove dust particles. As a control, three of five pipettes contained brine shrimp while the remaining two only had water; the pipettes were delivered to the boiling water at random. This investigation used a total of 60 brine shrimp deliveries to boiling water while Backster's had used 13. Positive correlations did not occur at a rate great enough to be considered statistically significant. Other controlled experiments that attempted to replicate Backster's findings also produced negative results. Botanist Arthur Galston and physiologist Clifford L. Slayman who investigated Backster's claims wrote: There is no objective scientific evidence for the existence of such complex behaviour in plants. The recent spate of popular literature on "plant consciousness" appears to have been triggered by "experiments" with a lie detector, subsequently reported and embellished in a book called The Secret Life of Plants. Unfortunately, when scientists in the discipline of plant physiology attempted to repeat the experiments, using either identical or improved equipment, the results were uniformly negative. Further investigation has shown that the original observations probably arose from defective measuring procedures. John M. Kmetz noted that the Backster effect was based on observations of only seven plants which nobody including Backster was able to replicate. The television show MythBusters also performed experiments (season 4, episode 18, 2006) to test the concept. The tests involved connecting plants to a polygraph galvanometer and employing actual and imagined harm upon the plants or upon others in the plants' vicinity. The galvanometer showed a reaction about one third of the time. The experimenters, who were in the room with the plant, posited that the vibrations of their actions or the room itself could have affected the polygraph. After isolating the plant, the polygraph showed a response slightly less than one third of the time. 
Later experiments with an EEG failed to detect anything. The show concluded that the results were not repeatable, and that the theory was not true. Backster's research was cited in the pseudoscientific book The Secret Life of Plants in 1973. Whilst the book captured public attention it severely damaged the credibility of the field of plant intelligence. Philosopher Yogi H. Hendlin noted that the book's "combination of haphazard, panpsychist metaphysical speculations and unmethodical citizen science stigmatised legitimate progressive plant research, alongside the era’s new-age pseudoscience, tarring the discipline’s serious inquiry". Dorothy Retallack In 1973, Dorothy Retallack authored The Sound of Music and Plants. In the book Retallack records experiments she conducted at Temple Buell College on applying different music to plants. She stated that the plants died in response to acid rock but flourished in response to classical music and jazz. The experiments were described as pseudoscientific as they were poorly designed and did not control for other factors such as humidity, light or water. Colorado Women's College was embarrassed by the experiments. Modern research Anthony Trewavas is credited with reintroducing the idea of plant intelligence in the early 2000s. In 2003, Trewavas led a study to see how the roots interact with one another and study their signal transduction methods. He was able to draw similarities between water stress signals in plants affecting developmental changes and signal transductions in neural networks causing responses in muscle. Particularly, when plants are under water stress, there are abscisic acid dependent and independent effects on development. This brings to light further possibilities of plant decision-making based on its environmental stresses. The integration of multiple chemical interactions show evidence of the complexity in these root systems. In 2012, Paco Calvo Garzón and Fred Keijzer speculated that plants exhibited structures equivalent to (1) action potentials (2) neurotransmitters and (3) synapses. Also, they stated that a large part of plant activity takes place underground, and that the notion of a 'root brain' was first mooted by Charles Darwin in 1880. Free movement was not necessarily a criterion of cognition, they held. The authors gave five conditions of minimal cognition in living beings, and concluded that 'plants are cognitive in a minimal, embodied sense that also applies to many animals and even bacteria.' In 2017 biologists from University of Birmingham announced that they found a "decision-making center" in the root tip of dormant Arabidopsis seeds. In 2014, Anthony Trewavas released a book called Plant Behavior and Intelligence that highlighted a plant's cognition through its colonial-organization skills reflecting insect swarm behaviors. This organizational skill reflects the plant's ability to interact with its surroundings to improve its survivability, and a plant's ability to identify exterior factors. Evidence of the plant's minimal cognition of spatial awareness can be seen in their root allocation relative to neighboring plants. The organization of these roots have been found to originate from the root tip of plants. On the other hand, Peter A. 
Crisp and his colleagues proposed a different view on plant memory in their review: plant memory could be advantageous under recurring and predictable stress; however, resetting or forgetting about the brief period of stress may be more beneficial for plants to grow as soon as the desirable condition returns. Affifi (2018) proposed an empirical approach to examining the ways plants coordinate goal-based behaviour with environmental contingency as a way of understanding plant learning. According to this author, associative learning will only demonstrate intelligence if it is seen as part of teleologically integrated activity. Otherwise, it can be reduced to mechanistic explanation. In 2017, Yokawa et al. found that, when exposed to anesthetics, a number of plants lost both their autonomous and touch-induced movements. Venus flytraps no longer generated electrical signals and their traps remained open when their trigger hairs were touched, and growing pea tendrils stopped their autonomous movements and were immobilized in a curled shape. Raja et al. (2020) found that potted French bean plants, when planted 30 centimetres from a garden cane, would adjust their growth patterns to enable themselves to use the cane as a support in the future. Raja later stated that "If the movement of plants is controlled and affected by objects in their vicinity, then we are talking about more complex behaviours (rather than simple reactions)". Raja proposed that researchers should look for corresponding cognitive signatures. A minority of researchers within the field of plant neurobiology argue that plants are conscious organisms. Peter Wohlleben argued for plant sentience in his 2016 book The Hidden Life of Trees. The book was widely criticized by biologists and forest scientists for using strong anthropomorphic and teleological language such as describing trees as having friendships and registering fear, love and pain. It has been described as containing a "conglomeration of half-truths, biased judgements, and wishful thinking". František Baluška argues for a model called the Cellular Basis of Consciousness (CBC) which proposes that all cells are conscious. The model has been criticized for being based on only speculation and lacking empirical evidence for its claim that cells have consciousness. Organizations Modern research on plant cognition is conducted by researchers associated with the Society for Plant Neurobiology that was established in 2005. Due to criticisms from botanists and complaints from early members that affiliations with the Society were negatively impacting their careers, the Society was renamed the Society of Plant Signaling and Behavior (SPSB) in 2009. Research on plant intelligence is also conducted by the International Laboratory of Plant Neurobiology headed by Stefano Mancuso. It has been described as "the world's only laboratory dedicated to plant intelligence". Criticism The idea of plant cognition is a source of controversy and is rejected by the majority of plant scientists. Plant neurobiology has been criticized for misleading the public with false terminology. There is no scientific evidence that plants possess consciousness or are sentient. Amadeo Alpi and 35 other scientists published an article in 2007 titled "Plant Neurobiology: No Brain, No Gain?" in Trends in Plant Science. In this article, they argue that since there is no evidence for the presence of neurons in plants, the idea of plant neurobiology and cognition is unfounded and needs to be redefined. 
They commented that "plant neurobiology does not add to our understanding of plant physiology, plant cell biology or signaling". In response to this article, Francisco Calvo Garzón published an article in Plant Signaling and Behavior. He states that, while plants do not have neurons as animals do, they do possess an information-processing system composed of cells. He argues that this system can be used as a basis for discussing the cognitive abilities of plants. See also Plant communication Plant perception (physiology) References Further reading Plant intelligence and neurobiology Criticism External links Plant perception (a.k.a. the Backster effect) Society of Plant Signaling and Behavior (formerly Society for Plant Neurobiology) Branches of botany Plant communication
Plant intelligence
Biology
2,825
50,728,706
https://en.wikipedia.org/wiki/Oscillatory%20baffled%20reactor
A Continuous Oscillatory Baffled Reactor (COBR) is a specially designed chemical reactor to achieve plug flow under laminar flow conditions. Achieving plug flow has previously been limited to either a large number of continuous stir tank reactors (CSTR) in series or conditions with high turbulent flow. The technology incorporates annular baffles to a tubular reactor framework to create eddies when liquid is pushed up through the tube. Likewise, when liquid is on a downstroke through the tube, eddies are created on the other side of the baffles. Eddy generation on both sides of the baffles creates very effective mixing while still maintaining plug flow. By using a COBR, potentially higher yields of product can be made with greater control and reduced waste. Design A standard COBR consists of a 10–150 mm ID tube with equally spaced baffles throughout. There are typically two pumps in a COBR; one pump is reciprocating to generate continuous oscillatory flow and a second pump creates net flow through the tube. This design offers a control over mixing intensity that conventional tubular reactors cannot achieve. Each baffled cell acts as a CSTR and, because a secondary pump is creating a net laminar flow, much longer residence times can be achieved relative to turbulent flow systems. With conventional tubular reactors, mixing is accomplished through stirring mechanisms or turbulent flow conditions, which is difficult to control. By changing variable values such as baffle spacing or thickness, COBRs can operate with much better mixing control. For instance, it has been found that a spacing of 1.5 times the tube diameter is the most effective mixing condition; furthermore, vortex deformation increases as baffle thickness increases beyond 3 mm. Biological applications The low shear rate and enhanced mass transfer provided by the COBR make it an ideal reactor for various biological processes. For shear rate, it has been found that COBRs have an evenly distributed, five-fold reduction in shear rate relative to conventional tubular reactors; this is especially important for biological processes given that high shear rates can damage microorganisms. For the case of mass transfer, COBR fluid mechanics allows for an increase in oxygen gas residence time. Furthermore, the vortices created in COBRs cause gas bubble break-up and thus an increase in surface area for gas transfer. For aerobic biological processes, therefore, COBRs again present an advantage. An especially promising aspect of the COBR technology is its ability to scale up processes while still retaining the advantages in shear rate and mass transfer. Limitations Though the prospect for COBR applications in fields like bioprocessing is very promising, there are a number of necessary improvements to be made before more global use. Clearly, there is additional complexity in the COBR design relative to other bioreactors, which can introduce complications in operation. Furthermore, for bioprocessing it is possible that fouling of baffles and internal surfaces becomes an issue. Perhaps the most significant advancement needed moving forward is further comprehensive studies demonstrating that COBR technology can indeed be useful in industry. There are currently no COBRs in use at industrial bioprocessing plants, and the evidence of its effectiveness, though very promising and theoretically an improvement relative to current reactors in industry, is limited to smaller laboratory-scale experiments. References Bioreactors
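As a rough numerical illustration of the design points above (the 1.5-tube-diameter baffle spacing, and the residence time being set by the net flow rather than by the oscillation), the following Python sketch computes the baffle count and mean residence time for a hypothetical tube; all dimensions and flow rates are invented example values, not data from any published reactor.

    import math

    # Hypothetical COBR geometry (example values only)
    tube_diameter_m = 0.025          # 25 mm ID tube
    tube_length_m = 10.0
    net_flow_rate_m3_s = 2.0e-6      # net volumetric flow delivered by the second pump

    # Baffle spacing of ~1.5 tube diameters, per the design guidance above
    baffle_spacing_m = 1.5 * tube_diameter_m
    n_baffled_cells = math.floor(tube_length_m / baffle_spacing_m)

    # Mean residence time is set by the net flow, not by the oscillation
    cross_section_m2 = math.pi * (tube_diameter_m / 2) ** 2
    tube_volume_m3 = cross_section_m2 * tube_length_m
    residence_time_s = tube_volume_m3 / net_flow_rate_m3_s

    print(f"baffle spacing:      {baffle_spacing_m * 1000:.1f} mm")
    print(f"baffled cells:       {n_baffled_cells}")
    print(f"mean residence time: {residence_time_s / 60:.1f} min")

Changing the baffle spacing or the net flow rate in the sketch shows how cell count and residence time can be tuned independently of the oscillation that drives the mixing.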
Oscillatory baffled reactor
Chemistry,Engineering,Biology
676
942,255
https://en.wikipedia.org/wiki/Nuclear%20navy
A nuclear navy, or nuclear-powered navy, refers to the portion of a navy consisting of naval ships powered by nuclear marine propulsion. The concept was revolutionary for naval warfare when first proposed. Prior to nuclear power, submarines were powered by diesel engines and could only submerge through the use of batteries. In order for these submarines to run their diesel engines and charge their batteries they would have to surface or snorkel. The use of nuclear power allowed these submarines to become true submersibles and unlike their conventional counterparts, they became limited only by crew endurance and supplies. Nuclear-powered aircraft carriers Currently, only the United States and France possess nuclear-powered aircraft carriers. The United States Navy has by far the most nuclear-powered aircraft carriers, with ten Nimitz-class carriers and one Gerald R. Ford-class carrier in service. The last conventionally-powered aircraft carrier left the U.S. fleet as of 12 May 2009, when the USS Kitty Hawk was deactivated. France's latest aircraft carrier, the Charles de Gaulle, is nuclear-powered. The United Kingdom rejected nuclear power early in the development of its Queen Elizabeth-class aircraft carriers on cost grounds, as even several decades of fuel use costs less than a nuclear reactor. Since 1949 the Bettis Atomic Power Laboratory near Pittsburgh, Pennsylvania has been one of the lead laboratories in the development of the nuclear navy. The planned indigenous Chinese carriers also feature nuclear propulsion. Nuclear-powered submarines The United States Navy operates the largest fleet of nuclear submarines. Only the United States Navy, the Royal Navy of the United Kingdom, and France's Marine Nationale field an all-nuclear submarine force. By 1989, there were over 400 nuclear-powered submarines operational or being built. Some 250 of these submarines have now been scrapped and some on order cancelled, due to weapons reduction programs. Russia and the United States had over one hundred each, with the United Kingdom and France fewer than twenty each and China six. The Indian Navy launched its first indigenous nuclear-powered submarine on 26 July 2009. India is also operating one nuclear attack submarine with talks of leasing one more nuclear submarine from Russia. India plans to build six nuclear attack submarines and follow on to the Arihant class of ballistic missile submarines. Nuclear-powered cruisers The US had several nuclear cruisers. The cruisers were the USS Bainbridge, USS California, USS Long Beach, USS Truxtun, USS South Carolina, USS Virginia, USS Texas, USS Mississippi, and USS Arkansas. The Long Beach was deemed too expensive and was decommissioned in 1995 instead of receiving its third nuclear refueling and proposed upgrade. It was sold for scrap in 2012 at Puget Sound Naval Shipyard. Currently the United States does not have any nuclear cruisers. Russia has four Kirov-class battlecruisers, though only one is active, the other three being laid up. The command ship SSV-33 Ural, based on the Kirov class, is also laid up. Seven civilian nuclear icebreakers remain in service: four of six Arktika-class icebreakers, the two Taymyr-class icebreakers Taymyr and Vaygach, and the LASH carrier and container ship Sevmorput. United States Navy By 2003 the U.S. Navy had accumulated over 5,400 "reactor years" of accident-free experience, and operated more than 80 nuclear-powered ships. Admiral Hyman G. Rickover Admiral Hyman G. 
Rickover (1900–1986), of the United States Navy, known as the "father of the nuclear navy", was an electrical engineer by training, and was the primary architect who implemented this daring concept; he believed that it was the natural next phase for the way military vessels could be propelled and powered. The challenge was to reduce the size of a nuclear reactor to fit on board a ship or submarine, as well as to encase it sufficiently so that radiation hazards would not be a safety concern. Soon after World War II, Rickover was assigned to the Bureau of Ships in September 1947 and received training in nuclear power at Oak Ridge, Tennessee. In February 1949 he received an assignment to the Division of Reactor Development, U.S. Atomic Energy Commission and then assumed control of the United States Navy's effort as Director of the Naval Reactors Branch in the Bureau of Ships. This dual role allowed him to lead the efforts to develop the world's first nuclear-powered submarine, USS Nautilus, which was launched in 1954. As Vice Admiral, from 1958, for three decades Rickover exercised tight control over the ships, technology, and personnel of the nuclear navy, even interviewing every prospective officer for new nuclear-powered navy vessels. Philip Abelson Leading nuclear physicist Philip Abelson (1913–2004) turned his attention under the guidance of Ross Gunn to applying nuclear power to naval propulsion. Their early efforts at the Naval Research Laboratory (NRL) provided an early glimpse at what was to become the nuclear Navy. United States Naval reactors At the present time, many important vessels in the United States Navy are powered by nuclear reactors. All submarines and aircraft carriers are nuclear-powered. Several cruisers were nuclear-powered but these have all been retired. United States naval reactors are given three-character designations consisting of a letter representing the ship type the reactor is designed for, a consecutive generation number, and a letter indicating the reactor's designer. The ship types are "A" for aircraft carrier, "C" for cruiser, "D" for destroyer, and "S" for submarine. The designers are "W" for Westinghouse, "G" for General Electric, "C" for Combustion Engineering, and "B" for Bechtel. Examples are S5W, D1G, A4W, and D2W. Information concerning United States naval reactors may or may not be classified (see Naval Nuclear Propulsion Information). Accidents involving naval nuclear-powered vessels United States USS Thresher (1963; Thresher/Permit-class; sank, 129 killed) USS Scorpion (1968; Skipjack-class; sank, 99 killed) Both sank for reasons unrelated to their reactor plants and still lie on the Atlantic sea floor. 
Russian or Soviet K-8 (1960; November-class submarine; loss of coolant) K-19 (1961; Hotel-class submarine; two loss of coolant accidents, 27 killed due to one accident) K-11 (1965; November-class submarine; two refueling criticalities) K-159 (1965; November-class submarine; radioactive discharge) Lenin (1965; Lenin-class icebreaker; loss of coolant) Lenin (1967; Lenin-class icebreaker; loss of coolant) K-140 (1968; Yankee-class submarine; power excursion) K-8 (loss of coolant) (1970; November-class submarine; sank after fire, 52 killed) K-320 (1970; Charlie I-class submarine; uncontrolled startup) K-116 (1979; Echo II-class submarine; reactor accident) K-122 (1980; Echo I-class submarine; fire, 14 killed) K-222 (1980; Papa-class submarine; uncontrolled startup) K-27 (1982; Modified November-class submarine; scuttled) K-123 (1982; Alfa-class submarine; loss of coolant) K-429 (1983; Charlie I-class submarine; sank due to improper work at shipyard, 16 killed) K-431 (1985; Echo II-class submarine; refueling criticality, 10 killed) K-429 (1985; Charlie I-class submarine; sank at moorings) K-219 (1986; Yankee I-class submarine; sank, 6 killed) K-278 Komsomolets (1989; Mike-class submarine; sank, 42 killed) K-192 (1989; Echo II-class submarine; loss of coolant) K-141 Kursk (2000; Oscar II-class submarine; sank, 118 killed) K-159 (2003; November-class submarine; sank under tow, 9 killed) While not all of these were reactor accidents, they had a major impact on nuclear marine propulsion and on global politics because they happened to nuclear vessels. Only four Soviet nuclear submarines accidentally sank with nuclear weapons on board and remain on the sea floor to this day. Operator Timeline See also JASON reactor List of United States Naval reactors Naval Reactors Decommissioning of Russian nuclear-powered vessels References External links http://www.nukestrat.com/pubs/nep7.pdf - 1994 paper highlighting limited, public-relations only value of all-nuclear task groups given continued dependence on conventionally fuelled escorts and continuous replenishment of supplies Navies by type Nuclear technology
Nuclear navy
Physics
1,742
2,219,658
https://en.wikipedia.org/wiki/Check%20Point
Check Point Software Technologies Ltd. is an American-Israeli multinational provider of software and combined hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management. History Check Point was established in Ramat Gan, Israel in 1993, by Gil Shwed (CEO), Marius Nacht (Chairman) and Shlomo Kramer (who left Check Point in 2003). Shwed had the initial idea for the company's core technology known as stateful inspection, which became the foundation for the company's first product, FireWall-1; soon afterwards they also developed one of the world's first VPN products, VPN-1. Shwed developed the idea while serving in Unit 8200 of the Israel Defense Forces, where he worked on securing classified networks. Initial funding of US$250,000 was provided by venture capital fund BRM Group. In 1994 Check Point signed an OEM agreement with Sun Microsystems, followed by a distribution agreement with HP in 1995. The same year, the U.S. head office was established in Redwood City, California. By February 1996, the company was named worldwide firewall market leader by IDC, with a market share of 40 percent. In June 1996 Check Point raised $67 million from its initial public offering on NASDAQ. In 1998, Check Point established a partnership with Nokia, which bundled Check Point's software with Nokia's computer network security appliances. In 2003, a class-action lawsuit was filed against Check Point over violation of the Securities Exchange Act by failing to disclose major financial information. On 14 August 2003 Check Point opened its branch in India's capital, Delhi (with the legal name Check Point Software Technologies India Pvt. Ltd.). Eyal Desheh was the first director appointed in India. During the first decade of the 21st century Check Point started acquiring other IT security companies, including Nokia's network security business unit in 2009. In 2019, researchers at Check Point found a security breach in Xiaomi phone apps. The security flaw was reported to be in a preinstalled app. Over the years many employees who worked at Check Point have left to start their own software companies. These include Shlomo Kramer, who started Imperva; Nir Zuk, who founded Palo Alto Networks; Ruvi Kitov and Reuven Harrison of Tufin; Yonadav Leitersdorf, who founded Indeni; and Avi Shua, who founded Orca Security. Critics As of December 2023, Check Point Software continues to operate in Russia, selling its cybersecurity products in the country. Despite the ongoing conflict in Ukraine, the company has maintained its office in Moscow and has faced criticism for its decision to remain active in Russia. SofaWare legal battle SofaWare Technologies was founded in 1999, as a cooperation between Check Point and SofaWare's founders, Adi Ruppin and Etay Bogner, with the purpose of extending Check Point from the enterprise market to the small business, consumer and branch office market. SofaWare's co-founder Adi Ruppin said that his company wanted to make the technology simple to use and affordable, and to lift the burden of security management from end users while adding some features. In 2001 SofaWare began selling firewall appliances under the SofaWare S-Box brand; in 2002 the company started selling the Safe@Office and Safe@Home line of security appliances, under the Check Point brand. 
By the fourth quarter of 2002 sales of SofaWare's Safe@Office firewall/VPN appliances had increased greatly, and SofaWare held the #1 revenue position in the worldwide firewall/VPN sub-$490 appliance market, with a 38% revenue market share. Relations between Check Point and the SofaWare founders went sour after the company acquisition in 2002. In 2004 Etay Bogner, co-founder of SofaWare, sought court approval to file a shareholder derivative suit, claiming Check Point was not transferring funds to SofaWare as required for its use of SofaWare's products and technology. His derivative suit was ultimately successful, and Check Point was ordered to pay SofaWare 13 million shekels for breach of contract. In 2006 a Tel Aviv District Court judge ruled that Bogner could sue Check Point by proxy for $5.1 million in alleged damage to SofaWare. Bogner claimed that Check Point, which owned 60% of SofaWare, had behaved belligerently, and withheld money due for use of SofaWare technology and products. Check Point appealed the ruling, but lost. In 2009 the Israeli Supreme Court ruled that a group of founders of SofaWare, which included Bogner, had veto power over any decision of SofaWare. The court ruled that the three founders could exercise their veto power only as a group and by majority rule. In 2011 Check Point settled all litigation relating to SofaWare. As part of the settlement it acquired the SofaWare shares held by Bogner and Ruppin, and began a process of acquiring the remaining shares, resulting in SofaWare becoming a wholly owned subsidiary. See also Economy of Israel Silicon Wadi References External links Corporate website Check Point Research Technology companies of Israel Computer hardware companies Computer security companies Computer security software companies Software companies established in 1993 Israeli brands Networking hardware companies Software companies of Israel Deep packet inspection Server appliance Companies based in San Carlos, California Software companies of the United States Companies based in Tel Aviv Companies listed on the Nasdaq 1993 establishments in Israel 1996 initial public offerings
Check Point
Technology
1,125
1,676,230
https://en.wikipedia.org/wiki/Lockstep%20%28computing%29
Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel. The redundancy (duplication) allows error detection and error correction: the output from lockstep operations can be compared to determine if there has been a fault if there are at least two systems (dual modular redundancy DMR), and the error can be automatically corrected if there are at least three systems (triple modular redundancy TMR), via majority vote. The term "lockstep" originates from army usage, where it refers to synchronized walking, in which marchers walk as closely together as physically practical. To run in lockstep, each system is set up to progress from one well-defined state to the next well-defined state. When a new set of inputs reaches the system, it processes them, generates new outputs and updates its state. This set of changes (new inputs, new outputs, new state) is considered to define that step, and must be treated as an atomic transaction; in other words, either all of it happens, or none of it happens, but not something in between. Sometimes a timeshift (delay) is set between systems, which increases the detection probability of errors induced by external influences (e.g. voltage spikes, ionizing radiation, or in situ reverse engineering). Lockstep memory Some vendors, including Intel, use the term lockstep memory to describe a multi-channel memory layout in which cache lines are distributed between two memory channels, so one half of the cache line is stored in a DIMM on the first channel, while the second half goes to a DIMM on the second channel. By combining the single error correction and double error detection (SECDED) capabilities of two ECC-enabled DIMMs in a lockstep layout, their single-device data correction (SDDC) nature can be extended into double-device data correction (DDDC), providing protection against the failure of any single memory chip. Downsides of the Intel's lockstep memory layout are the reduction of effectively usable amount of RAM (in case of a triple-channel memory layout, maximum amount of memory reduces to one third of the physically available maximum), and reduced performance of the memory subsystem. Dual modular redundancy Where the computing systems are duplicated, but both actively process each step, it is difficult to arbitrate between them if their outputs differ at the end of a step. For this reason, it is common practice to run DMR systems as "master/slave" configurations with the slave as a "hot-standby" to the master, rather than in lockstep. Since there is no advantage in having the slave unit actively process each step, a common method of working is for the master to copy its state at the end of each step's processing to the slave. Should the master fail at some point, the slave is ready to continue from the previous known good step. While either the lockstep or the DMR approach (when combined with some means of detecting errors in the master) can provide redundancy against hardware failure in the master, they do not protect against software error. If the master fails because of a software error, it is highly likely that the slave - in attempting to repeat the execution of the step which failed - will simply repeat the same error and fail in the same way, an example of a common mode failure. Triple modular redundancy Where the computing systems are triplicated, it becomes possible to treat them as "voting" systems. If one unit's output disagrees with the other two, it is detected as having failed. 
The matched output from the other two is treated as correct. See also Master-checker NonStop (server computers) Stratus VOS VAXft References External links Enabling Memory Reliability, Availability, and Serviceability Features on Dell PowerEdge Servers, 2005 Chipkill correct memory architecture, August 2000, by David Locklear Classes of computers Fault-tolerant computer systems
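As a minimal illustration of the voting scheme just described (a software sketch only, not a model of any particular lockstep hardware), the following Python code runs three replicas of the same step function, compares their outputs, and accepts the majority value while flagging a disagreeing unit as failed.

    from collections import Counter

    def tmr_vote(outputs):
        """Majority vote over three replicated outputs.
        Returns (voted_value, failed_indices); raises if no majority exists."""
        counts = Counter(outputs)
        value, votes = counts.most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority: more than one unit failed")
        failed = [i for i, out in enumerate(outputs) if out != value]
        return value, failed

    def run_step(replicas, state, inputs):
        """Run one lockstep step on all replicas and arbitrate the results."""
        outputs = [replica(state, inputs) for replica in replicas]
        voted, failed = tmr_vote(outputs)
        if failed:
            print(f"unit(s) {failed} disagreed and are treated as failed")
        return voted

    # Example: three copies of the same step function, one of which is faulty.
    good = lambda state, x: state + x
    faulty = lambda state, x: state + x + 1          # simulated single-unit fault
    print(run_step([good, good, faulty], state=10, inputs=5))   # warns about unit 2, then prints 15

Hardware implementations perform this comparison at each well-defined step rather than in software, but the arbitration idea is the same majority vote.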
Lockstep (computing)
Technology,Engineering
834
51,453
https://en.wikipedia.org/wiki/Singular%20function
In mathematics, a real-valued function f on the interval [a, b] is said to be singular if it has the following properties: f is continuous on [a, b]. (**) there exists a set N of measure 0 such that for all x outside of N, the derivative f′(x) exists and is zero; that is, the derivative of f vanishes almost everywhere. f is non-constant on [a, b]. A standard example of a singular function is the Cantor function, which is sometimes called the devil's staircase (a term also used for singular functions in general). There are, however, other functions that have been given that name. One is defined in terms of the circle map. If f(x) = 0 for all x ≤ a and f(x) = 1 for all x ≥ b, then the function can be taken to represent a cumulative distribution function for a random variable which is neither a discrete random variable (since the probability is zero for each point) nor an absolutely continuous random variable (since the probability density is zero everywhere it exists). Singular functions occur, for instance, as sequences of spatially modulated phases or structures in solids and magnets, described in a prototypical fashion by the Frenkel–Kontorova model and by the ANNNI model, as well as in some dynamical systems. Most famously, perhaps, they lie at the center of the fractional quantum Hall effect. When referring to functions with a singularity When discussing mathematical analysis in general, or more specifically real analysis or complex analysis or differential equations, it is common for a function which contains a mathematical singularity to be referred to as a 'singular function'. This is especially true when referring to functions which diverge to infinity at a point or on a boundary. For example, one might say, "1/x becomes singular at the origin, so 1/x is a singular function." Advanced techniques for working with functions that contain singularities have been developed in the subject called distributional or generalized function analysis. A weak derivative is defined that allows singular functions to be used in partial differential equations, etc. See also Absolute continuity Mathematical singularity Generalized function Distribution Minkowski's question-mark function References (**) This condition depends on the references Fractal curves Types of functions
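As a concrete illustration of the standard example above, the Cantor function can be evaluated from the ternary expansion of its argument: truncate the expansion after the first digit 1 (if any), replace every 2 by 1, and read the result as a binary expansion. The following Python sketch (illustrative only) implements this to a fixed number of digits.

    def cantor_function(x, depth=40):
        """Approximate the Cantor function (devil's staircase) on [0, 1]
        from the first `depth` ternary digits of x."""
        if not 0.0 <= x <= 1.0:
            raise ValueError("x must lie in [0, 1]")
        if x == 1.0:
            return 1.0
        result, scale = 0.0, 0.5
        for _ in range(depth):
            x *= 3
            digit = int(x)
            x -= digit
            if digit == 1:
                result += scale              # truncate after the first ternary digit 1
                break
            result += scale * (digit // 2)   # ternary 0 -> binary 0, ternary 2 -> binary 1
            scale /= 2
        return result

    # Sample values: c(0) = 0, c(1/3) = 1/2, c(1/2) = 1/2, c(3/4) = 2/3, c(1) = 1
    for x in (0.0, 1/3, 0.5, 0.75, 1.0):
        print(f"c({x:.4f}) = {cantor_function(x):.6f}")

The derivative of this function exists and equals zero at every point outside the Cantor set, and the Cantor set has measure zero, so it can serve as the exceptional set N in the definition above.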
Singular function
Mathematics
481
12,818,929
https://en.wikipedia.org/wiki/Astronomische%20Nachrichten
Astronomische Nachrichten (Astronomical Notes), one of the first international journals in the field of astronomy, was established in 1821 by the German astronomer Heinrich Christian Schumacher. It claims to be the oldest astronomical journal in the world that is still being published. The publication today specializes in articles on solar physics, extragalactic astronomy, cosmology, geophysics, and instrumentation for these fields. All articles are subject to peer review. Early history The journal was founded in 1821 by Heinrich Christian Schumacher, under the patronage of Christian VIII of Denmark, and quickly became the world's leading professional publication for the field of astronomy. Schumacher edited the journal at the Altona Observatory, then under the administration of Denmark, later part of Prussia, and today part of the German city of Hamburg. Schumacher edited the first 31 issues of the journal, from its founding in 1821 until his death in 1850. These early issues ran to hundreds of pages, and consisted mostly of letters sent by astronomers to Schumacher, reporting their observations. The journal proved to be a great success, and over the years Schumacher received thousands of letters from hundreds of contributors. The letters were published in the language in which they were submitted, mostly German, but also English, Italian and other languages. The journal's renown was acknowledged by the British astronomer John Herschel (then secretary to the Royal Astronomical Society), who praised it in a letter to the Danish King in 1840. Other astronomical journals were also founded around this time, such as the British Monthly Notices of the Royal Astronomical Society, which was founded in 1827. It was the importance of Astronomische Nachrichten, however, that led the American astronomer Benjamin A. Gould in 1850 to found The Astronomical Journal in the United States. Later history Following Schumacher's death, the interim director of the observatory and editor of the journal was Adolph Cornelius Petersen, who had worked at the observatory with Schumacher for 24 years from around 1825. Petersen, who died in 1854, was later aided as editor by the Danish astronomer Thomas Clausen, who had also previously worked at the observatory. The editor from 1854 was the German astronomer Christian August Friedrich Peters, who had taken over as director of the observatory at Altona. In 1872, the observatory moved from Altona to Kiel, from where Peters continued to publish the journal until his death in 1880, aided in his final years by his son Carl Friedrich Wilhelm Peters. The journal would continue to be published in Kiel until 1938. Following Peters's death, Adalbert Krueger served as the new director of the observatory and editor of the journal from 1881 until he died in 1896. At this time the journal was the organ of the Astronomische Gesellschaft. The editor from 1896 until his death in 1907 was the German astronomer Heinrich Kreutz, who had previously assisted Krueger. Kreutz edited volumes 140 to 175. Other staff members during the period from 1880 to 1907 included the astronomers Richard Schorr and Elis Strömgren. The editor from 1907 to 1938 was the German astronomer Hermann Kobold. After Kobold retired in 1938, the journal's editorial office moved from Kiel to Berlin, and during the Second World War the journal was published by the Astronomical Calculation Institute (Heidelberg University) (Astronomisches Recheninstitut) in Berlin-Dahlem. 
In 1945, the institute was relocated to Heidelberg, but the journal remained in the Berlin region. After the war, Astronomische Nachrichten was edited by Hans Kienle, director of the Astrophysical Observatory of Potsdam. The observatory was in Potsdam, on the outskirts of Berlin, and from 1948 the journal was published by the publishing company Akademie-Verlag, under the auspices of the German Academy of Sciences Berlin. One of Kienle's students, Johann Wempe (1906–1980), succeeded him as editor in 1951 and held the post for 22 years. From 1949, and officially from the 1950s until the reunification of Germany in 1990, the journal was published in the German Democratic Republic, behind the Iron Curtain. From 1974 onwards, the journal issues have listed a chief editor and an editorial board, and the journal was bilingual, with the same material published in German and English. Akademie-Verlag was taken over by VCH in 1990. From 1996 to the present day (from volume 317), the journal has been published by Wiley-VCH. This company was formed in 1996 when the German publishing company VCH (founded in 1921) joined John Wiley and Sons. The journal's editorial offices remain in Potsdam, at the Astrophysical Institute Potsdam, and the current editor (2007) is K. G. Strassmeier. The back catalogue of the journal includes 43,899 articles in 99,565 pages in 328 volumes, published over a period of over 180 years. Publication format and schedule Although the journal was founded in 1821, the first volume was dated 1823. Volume 1 (1823) consisted of 33 issues and a total of 516 pages. The next year, volume 2 (1824), saw 34 issues and 497 pages. Apart from the years 1830–1832, when two volumes were published in 1831 and none in 1830 or 1832, single volumes of around 20–30 issues were published each year until 1846. Then it was mostly two volumes a year until 1884. There were a record number of five volumes published in 1884. Most years from 1884 to 1914 had three or more volumes. The years 1915–1919 (coinciding with World War I) saw a dip in publication, with 1916 and 1919 only featuring one volume. From 1920 to 1940, most years saw three volumes published. Only one volume per year was published from 1941 to 1943, and the journal was not published at all from 1944 to 1946 (Berlin suffered heavy damage in the closing years of World War II). From 1947 to the present, the journal has published a volume per year in most years, but did not publish at all in some years in the 1950s, 1960s and 1970s. From 1974 to 1996, the journal was published as 6 issues a year, with each volume being 300–400 pages. Under the new publishers, Wiley, this pattern continued until 2003, at which point the number of issues per year increased to 9 due to the publication of supplementary issues. Since 2004 there have been 10 issues a year. In 2006, volume 327, there were 10 issues and 1100 pages. Editors until 1972 See also Johan Sigismund von Møsting References and footnotes External links Homepage https://www.aip.de/AN/ First editorial by Heinrich Christian Schumacher in 1823 (German Wikisource) Astronomische Nachrichten: News in Astronomy and Astrophysics 1823–1998 – backcatalogue from Wiley InterScience Astronomische Nachrichten search link from NASA's Astrophysics Data System (alternative way to access old issues) Astronomische Nachrichten entry from Journal Info Astronomy journals History of astronomy Open access journals Wiley (publisher) academic journals 10 times per year journals Publications established in 1821
Astronomische Nachrichten
Astronomy
1,447
14,703,837
https://en.wikipedia.org/wiki/Great%20White%20Brotherhood
The Great White Brotherhood, in belief systems akin to Theosophy and New Age, are said to be perfected beings of great power who spread spiritual teachings through selected humans. The members of the Brotherhood may be known as the Masters of the Ancient Wisdom, the Ascended Masters, the Church Invisible, or simply as the Hierarchy. The first person to talk about them in the West was Helena Petrovna Blavatsky (Theosophy), after she and other people claimed to have received messages from them. These included Helena Roerich, Alice A. Bailey, Guy Ballard, Geraldine Innocente (The Bridge to Freedom), Elizabeth Clare Prophet, Bob Sanders, and Benjamin Creme. History The idea of a secret organization of enlightened mystics, guiding the spiritual development of the human race, was pioneered in the late eighteenth century by Karl von Eckartshausen (1752-1803) in his book The Cloud upon the Sanctuary; Eckartshausen called this body of mystics, who remained active after their physical deaths on earth, the Council of Light. Eckartshausen's proposed communion of living and dead mystics, in turn, drew partially on Christian ideas such as the Communion of the Saints, and partially on previously circulating European ideas about secret societies of enlightened, mystical, or magic adepts typified by the Rosicrucians and the Illuminati. The Mahatma Letters began publication in 1881 with information purportedly revealed by "Koot Hoomi" to Alfred Percy Sinnett, and were also influential on the early development of the tradition. Koot Hoomi, through Sinnett, revealed that high-ranking members of mystic organizations in India and Tibet were able to maintain regular telepathic contact with one another, and thus were able to communicate to each other, and also to Sinnett, without the need for either written or oral communications, and in a manner similar to the way that spirit mediums claimed to communicate with the spirits of the dead. The letters published by Sinnett, which proposed the controversial doctrine of reincarnation, were said to have been revealed through this means. Eckartshausen's idea was expanded in the teachings of Helena P. Blavatsky as developed by Charles W. Leadbeater, Alice Bailey and Helena Roerich. Blavatsky, founder of the Theosophical Society, attributed her teachings to just such a body of adepts; in her 1877 book Isis Unveiled, she called the revealers of her teachings the "Masters of the Hidden Brotherhood" or the "Mahatmas". Blavatsky claimed that she had made physical contact with these adepts' earthly representatives in Tibet; but also, that she continued to receive teachings from them through psychic channels, through her abilities of spirit mediumship. Ideas about this secret council of sages, under several names, were a widely shared feature of late nineteenth-century and early twentieth-century esotericism. Arthur Edward Waite, in his 1898 Book of Black Magic and of Pacts, hinted at the existence of a secret group of initiates who dispense truth and wisdom to the worthy. (Symonds, John and Grant, Kenneth, eds.), The actual phrase "Great White Brotherhood" was used extensively in Leadbeater's 1925 book The Masters and the Path. Alice A. Bailey also claimed to have received numerous revelations from the Great White Brotherhood between 1920 and 1949, which are compiled in her books known collectively by her followers as the Alice A. Bailey Material. 
Since the introduction of the phrase, the term "Great White Brotherhood" is in some circles used generically to refer to any concept of an enlightened community of adepts, on Earth or in the hereafter, with benevolent aims toward the spiritual development of the human race, and without strict regard to the names used within the tradition. Dion Fortune adopts the name to refer to the community of living and dead adepts. The ritual magicians of the Western mystery tradition sometimes refer to the Great White Brotherhood as the "Great White Lodge", a name that appears to indicate that they imagine it constitutes an initiatory hierarchy similar to Freemasonry. Gareth Knight describes its members as the "Masters" or "Inner Plane Adepti", who have "gained all the experience, and all the wisdom resulting from experience, necessary for their spiritual evolution in the worlds of form." While some go on to "higher evolution in other spheres", others become teaching Masters who stay behind to help younger initiates in their "cyclic evolution on this planet". Only a few of this community are known to the human race; these initiates are the "teaching Masters". The AMORC Rosicrucian order maintains a difference between the "Great White Brotherhood" and the "Great White Lodge", saying that the Great White Brotherhood is the "school or fraternity" of the Great White Lodge, and that "every true student on the Path" aspires to membership in this Brotherhood. Bulgarian Gnostic master Peter Deunov referred to his organization of followers as the Universal White Brotherhood, and it is clear that he too was referring to the Western esoteric community-at-large. When ex-communicated as a heretic on 7 July 1922, he defended the Brotherhood as follows: ‘Let the Orthodox Church resolve this issue, whether Christ has risen, whether Love is accepted in the Orthodox Church. There is one church in the world. But the Universal White Brotherhood is outside the church - it is higher than the church. But even higher than the Universal White Brotherhood is the Kingdom of Heaven. Hence the Church is the first step, the Universal White Brotherhood is the second step, and the Kingdom of Heaven is the third step - the greatest one that is to be manifested.’ (24 June 1923). Similarly, Bulgarian teacher Omraam Mikhaël Aïvanhov (Deunov's principal disciple) formally established Fraternité Blanche Universelle as an "exoteric" esoteric organization still operating today in Switzerland, Canada, the USA, the UK and parts of Scandinavia. The term Great White Brotherhood was further developed and popularized in 1934 with the publication of "Unveiled Mysteries" by Guy Ballard's "I AM" Activity. This Brotherhood of "Immortal Saints and Sages" who have gone through the Initiations of the Transfiguration, Resurrection, and the Ascension was further popularized by Ascended Master Teachings developed by The Bridge to Freedom, The Summit Lighthouse and the Church Universal and Triumphant, and The Temple of The Presence. Benjamin Creme has published books — he claims the information within them has been telepathically transmitted to him from the Great White Brotherhood. Founding of the Great White Brotherhood In 1952, Geraldine Innocente, messenger for The Bridge to Freedom, delivered this address purported to be from Sanat Kumara describing the founding of the "Great White Brotherhood": " . . . 
I had nothing to work with but Light and Love, and many centuries passed before even two lifestreams applied for membership - One, later became Buddha (now, Lord of the World, the Planetary Logos Gautama Buddha) and the Other, became the Cosmic Christ (Lord Maitreya, now the Planetary Buddha). The Brotherhood has grown through these ages and centuries until almost all the offices are held now by those belonging to the evolution of Earth and those who have volunteered to remain among her evolution. . .."

Members of The Bridge to Freedom believe that on July 4, 1954, Sanat Kumara stated through Geraldine Innocente:

" . . . Thus We took Our abode upon the sweet Earth. Through the same power of centripetal and centrifugal force of which I spoke (cohesion and expansion of the magnetic power of Divine Love), We then began to magnetize the Flame in the hearts of some of the Guardian Spirits who were not sleeping so soundly and who were not too enthusiastically engaged in using primal life for the satisfaction of the personal self.

"In this way, the Great White Brotherhood began. The Three-fold Flame within the heart of Shamballa, within the Hearts of the Kumaras and Myself, formed the magnetic Heart of the Great White Brotherhood by Whom you have all been blessed and of which Brotherhood you all aspire to become conscious members. . . . "

Great Brotherhood of Light

The Great White Brotherhood, also known as the Great Brotherhood of Light or the Spiritual Hierarchy of Earth, is perceived as a spiritual organization composed of those Ascended Masters who have risen from the Earth into immortality but still maintain an active watch over the world. C. W. Leadbeater said, "The Great White Brotherhood also includes members of the Heavenly Host (the Spiritual Hierarchy directly concerned with the evolution of our world), Beneficent Members from other planets that are interested in our welfare, as well as certain unascended chelas". The Masters of the Ancient Wisdom are believed by Theosophists to be joined together in service to the Earth under the name of the Great White Brotherhood.

The use of the term "white" refers to their use of white magic, as opposed to black magic, and is unrelated to race. In her later writings, Blavatsky described the masters as ethnically Tibetan or Indian (Hindu), not European. Recent skeptical research indicates, however, that this description was used by Blavatsky to hide the real identity of her teachers, some of whom are said to have really been well-known Indian rulers or personalities of her time.

Most occult groups assign a high level of importance to the Great White Brotherhood, but some make interaction with the Ascended Masters of the Brotherhood a major focus of their existence. Of these, the most prominent are the "I Am" Activity, founded in the 1930s, The Bridge to Freedom, the Church Universal and Triumphant, and The Temple of The Presence. Belief in the Brotherhood and the Masters is an essential part of the syncretistic teachings of various organizations that have continued and expanded the Theosophical philosophical concepts. Information given by the Summit Lighthouse and the I AM movement is suspect, since none of the writers of these groups are Masters of any Brotherhood.
Examples of those believed to be Ascended Masters, according to various unconfirmed sources, include Master Jesus, Confucius, Gautama Buddha, Mary the Mother of Jesus, Hilarion, Enoch, Paul the Venetian, Kwan Yin, Saint Germain, and Kuthumi. These sources say that all of these figures put aside any differences they might have had in their earthly careers and unite to advance the spiritual well-being of humanity.

Agni Yoga

The Great White Brotherhood is the name given in some metaphysical and occult circles to adepts of wisdom, in or out of earthly incarnation, who have assumed responsibility for the cosmic destiny of the human race, both individually and collectively. Nicholas Roerich and his wife, Helena Roerich, inspired by the Theosophical writings of H. P. Blavatsky, published the Agni Yoga series of books. Their contents, claimed to be inspired by the Master Morya, described the work of the White Brotherhood and the Spiritual Hierarchy.

See also
Ascended masters
Bodhisattva
Communion of Saints
Masters of the Ancient Wisdom
Secret Chiefs
Marina Tsvigun (Maria Devi Khristos) of the Ukrainian White Brotherhood

Notes

External links
The Great White Brotherhood - website for books and messages from The Great White Brotherhood
The Stairway To Freedom - website for The Stairway To Freedom book dictated by The Great White Brotherhood

Ascended Master Teachings
New religious movements
Spiritual evolution
Theosophical philosophical concepts
Great White Brotherhood
Biology
2,372
8,942,272
https://en.wikipedia.org/wiki/Horizontal%20resistance
In genetics, the term horizontal resistance was first used by J. E. Vanderplank to describe many-gene resistance, which is sometimes also called generalized resistance. It contrasts with vertical resistance, the term used to describe single-gene resistance. Raoul A. Robinson further refined the definition of horizontal resistance. Unlike vertical resistance and vertical parasitic ability, horizontal resistance and horizontal parasitic ability are entirely independent of each other in genetic terms.

In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Plants with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance; their seed is stored and bred back up to sufficient volume for further testing. The hope is that these remaining plants carry several different types of partial-resistance genes, so that crossbreeding this pool back on itself brings multiple partial-resistance genes together and provides resistance to a wider variety of pathogens.

Successive rounds of breeding for horizontal resistance proceed in a more traditional fashion, selecting plants for disease resistance as measured by yield. These plants are exposed to native regional pathogens and given minimal assistance in fighting them.

References

Phytopathology
Molecular biology
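The selection logic of that first breeding round can be illustrated with a small simulation. The sketch below is not from the article; the gene counts, effect sizes, thresholds and function names are arbitrary assumptions chosen for illustration. It models each plant as carrying an optional major vertical-resistance gene plus several minor partial-resistance genes, discards both the fully susceptible plants and the vertically resistant ones, and interbreeds the partially resistant remainder so that different minor genes can recombine in the next generation.

```python
# Illustrative toy model of the first round of selection for horizontal
# resistance. All parameter values here are assumptions for the example,
# not figures from the article.
import random

N_MINOR_GENES = 10        # hypothetical number of partial-resistance polygenes
MINOR_EFFECT = 0.1        # assumed resistance contributed by each minor allele
POPULATION_SIZE = 500
DEATH_THRESHOLD = 0.2     # below this resistance level the plant dies when exposed

def random_plant():
    """A plant is (carries a major vertical-resistance gene?, minor-allele flags)."""
    has_vertical = random.random() < 0.05
    minor_alleles = [random.random() < 0.2 for _ in range(N_MINOR_GENES)]
    return has_vertical, minor_alleles

def partial_resistance(minor_alleles):
    return MINOR_EFFECT * sum(minor_alleles)

def first_round(population):
    """Keep only partially resistant plants after exposure to the pathogen."""
    survivors = []
    for has_vertical, minor_alleles in population:
        if has_vertical:
            continue  # unaffected by the pathogen: vertical resistance, removed
        if partial_resistance(minor_alleles) < DEATH_THRESHOLD:
            continue  # effectively no resistance: the plant dies
        survivors.append((has_vertical, minor_alleles))
    return survivors

def cross(parent_a, parent_b):
    """Recombine minor alleles so different partial-resistance genes can combine."""
    child = [random.choice(pair) for pair in zip(parent_a[1], parent_b[1])]
    return False, child

random.seed(0)
population = [random_plant() for _ in range(POPULATION_SIZE)]
parents = first_round(population)
offspring = [cross(*random.sample(parents, 2)) for _ in range(POPULATION_SIZE)]
mean_before = sum(partial_resistance(m) for _, m in population) / POPULATION_SIZE
mean_after = sum(partial_resistance(m) for _, m in offspring) / POPULATION_SIZE
print(f"parents kept: {len(parents)}, "
      f"mean resistance before: {mean_before:.2f}, after one round: {mean_after:.2f}")
```

Because the vertically resistant plants are removed before crossing, any rise in the offspring's mean resistance in this toy model comes only from the accumulation and recombination of the minor genes, which mirrors the stated aim of the first selection round.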
Horizontal resistance
Chemistry,Biology
248