Columns: id (int64, 580 to 79M) · url (string, 31 to 175 chars) · text (string, 9 to 245k chars) · source (string, 1 to 109 chars) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
24,003,318
https://en.wikipedia.org/wiki/C7H13N
The molecular formula C7H13N (molar mass: 111.18 g/mol, exact mass: 111.1048 u) may refer to: Pyrrolizidine Quinuclidine Molecular formulas
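As a quick arithmetic check of the masses quoted above, here is a minimal Python sketch; the atomic-mass values are standard reference figures rather than anything stated in the entry:

```python
# Average atomic masses (g/mol) and monoisotopic masses (u) for C, H, N
# (standard reference values, approximate).
AVERAGE = {"C": 12.0107, "H": 1.00794, "N": 14.0067}
MONOISOTOPIC = {"C": 12.0, "H": 1.00782503, "N": 14.00307401}

def mass(formula, table):
    """Sum element masses weighted by their counts in the formula."""
    return sum(table[el] * n for el, n in formula.items())

c7h13n = {"C": 7, "H": 13, "N": 1}           # shared by pyrrolizidine and quinuclidine
print(round(mass(c7h13n, AVERAGE), 2))       # 111.18 g/mol (molar mass)
print(round(mass(c7h13n, MONOISOTOPIC), 4))  # 111.1048 u (exact mass)
```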
C7H13N
Physics,Chemistry
62
21,428,515
https://en.wikipedia.org/wiki/Natasha%20Tsakos
Natasha Tsakos is a conceptual director, interactive designer, and performance artist from Geneva, Switzerland, living in Florida. Her work explores the symbiosis of technology and live performance. She is the president and founder of NTiD inc. Career Tsakos is a classically trained actor, playwright and director. She has written 12 original works, directed 30 plays, 6 commercial-length films for Ford Motors as part of the Fiesta Movement, and 2 music videos. Her work has been commissioned by Nickelodeon, Miami Art Museum, Miami Light Project, Adrienne Arsht Center, Art Basel, Discovery Channel, and featured on HBO, MTV, and BBC. Tsakos was voted one of the State of Florida's "Power Players" by Florida International Magazine in 2008, and was named one of Miami's top 100 creative people by Miami New Times in 2011. She was a member of Octavio Campos' Hybrid Theater company Camposition, and the principal character in its anti-bullying film Intention Intervention. She was also the lead performer in the roving interactive entertainment troupe Circ X, appearing in more than 10,000 events with two US national tours. Tsakos performed with Cirque du Soleil for the Super Bowl's opening ceremony in 2007 and at the Coachella Music and Arts Festival with Red Bull. Technoformances Tsakos' original creations have been referred to as "technoformances", which combine virtual technology, electronic music, and movement studies. The art form largely took off after her show UP WAKE, which first appeared in 2002 as a short-form dance theatre performance and was adapted into a full-length production in 2006. UP WAKE inaugurated the Grand Opening of the Adrienne Arsht Center, formerly known as the Miami Performing Arts Center, and toured globally. In the performance, Tsakos interacted with live 3D animation as her character Zero journeyed through a day of dream and wake, unable to distinguish the two. UP WAKE was named best dance performance of the year in Florida by The Sunpost, which wrote: "with startling use of animation, a magical briefcase, and a movement vocabulary that ranged from MTV to avant-garde, Tsakos achieved the rarest of feats: she made a profound statement on the way we live now in an utterly original language." A year prior to the launch of Project Natal, better known as the Xbox Kinect, Tsakos was brought on as a creative developer to work in collaboration with Digital Worlds' research department at the University of Florida to develop a new form of sensor-less motion capture technology. The preliminary phases premiered at the eComm Conference in San Francisco in April 2010 and at SIGGRAPH 2010. A year later she produced CLIMAX, a provocative multimedia show about the environment, for EcoArt Fashion Week during Art Basel. CLIMAX closed Al Gore's Climate Reality Project in 2011 and opened the G20 Summit, held in Los Cabos, Mexico, in 2012. Tsakos was commissioned again in 2012 by Miami Light Project to create a new show for its "Here and Now Festival" series. According to the Miami Herald, the technology for OMEN included "3D mapping technology in which projected animation transforms flat surfaces into fantastical yet realistic looking images". Tsakos described the phenomenon of data visualization and projection mapping applied to theatrical productions as the "data-tainment movement". In 2013, the Governor's School for the Arts commissioned Tsakos for its 25th anniversary to create a production that included its 355 music, dance, performing and visual arts students.
The result was a full-length interdisciplinary show called ZO, following artists as they pursue their dreams of stardom by auditioning for an insane talent show. ZO premiered at Chrysler Hall in Norfolk, Virginia. That same year, Tsakos was named a finalist in the Arts category for the World Technology Awards. Tsakos later collaborated with Mexican media agency Circus to create the show QUARRY, a performance focused on how concrete is made, from the natural forces involved to the industrialization process. QUARRY was presented at Centro Banamex in Mexico City and incorporated a 200-foot screen set behind 15-foot cylinders, IMAG projections, and 15 performers dancing to projection-mapped animation. In 2013, Tsakos worked with the Discovery Channel to produce two theatrical pieces, SUPER INTENSO! and SUPER WOMAN, which premiered at Gotham Hall in New York City. SUPER INTENSO! later celebrated Discovery Brazil's 20th anniversary in São Paulo in 2014. Tsakos opened the Tribeca Film Festival's Imagination Talks with her piece "FACE FORWARD: A MANIFESTO FOR THE FUTURE" and was the official host for the all-day summit, presenting some of today's most influential minds, including Google's Captain of Moonshots, Astro Teller; the founding director of Stanford University's Virtual Human Interaction Lab, Jeremy Bailenson; and the Chief Entrepreneur Officer at 3D Systems, Ping Fu. Tsakos worked in Havana, Cuba, between 2014 and 2015, developing an original concept with renowned Cuban singer Isaac Delgado and choreographer Santiago Alfonso, but the show never went into full production. Public speaking Tsakos has spoken at conferences such as the TED Conference in Long Beach, California, the 2009 International Symposium on Mixed and Augmented Reality, the 2010 National Innovation Conference, La Ciudad de Las Ideas in 2009, 2010, and 2016, The Penny Stamp Speaker Series at the University of Michigan, the 2013 National Theatre Conference, TEDxBroadway 2014, TEDxPuraVidaJoven 2014, SIME Stockholm 2014, TEDx San Diego 2015, the 2016 NOVUS Summit at the United Nations General Assembly Hall, Google, Kellogg Innovation Network, YPO, and IBM Summit, amongst others. Education Tsakos graduated in 2000 with a BFA in Theatre from New World School of the Arts at the University of Florida. She was honored in 2016 with the Alumni Recognition Award, the first such award in New World School of the Arts history. In 2015, Tsakos won Singularity University's Global Impact Challenge competition, and was awarded a Google grant to attend SU's Graduate Studies Program at NASA Research Park in Silicon Valley. The program focuses on accelerating technologies to address humanity's greatest challenges. In 2017, Tsakos became the ambassador of Singularity University's Miami Chapter. Bibliography The Lost Architect (2013) Colours: a nonsense The first edition (2013) Colours: a nonsense The second edition (2014) References Living people Swiss performance artists Artists from Florida Year of birth missing (living people) University of Florida alumni Swiss voice actresses 20th-century Swiss women artists 21st-century Swiss women artists Artists from Geneva Swiss theatre directors Swiss women theatre directors 21st-century Swiss actresses Women performance artists Multimedia artists Women multimedia artists 21st-century Swiss artists
Natasha Tsakos
Technology
1,376
754,851
https://en.wikipedia.org/wiki/One-parameter%20group
In mathematics, a one-parameter group or one-parameter subgroup usually means a continuous group homomorphism φ : ℝ → G from the real line ℝ (as an additive group) to some other topological group G. If φ is injective then φ(ℝ), the image, will be a subgroup of G that is isomorphic to ℝ as an additive group. One-parameter groups were introduced by Sophus Lie in 1893 to define infinitesimal transformations. According to Lie, an infinitesimal transformation is an infinitely small transformation of the one-parameter group that it generates. It is these infinitesimal transformations that generate a Lie algebra that is used to describe a Lie group of any dimension. The action of a one-parameter group on a set is known as a flow. A smooth vector field on a manifold, at a point, induces a local flow, a one-parameter group of local diffeomorphisms, sending points along integral curves of the vector field. The local flow of a vector field is used to define the Lie derivative of tensor fields along the vector field. Definition A curve γ : ℝ → G is called a one-parameter subgroup of G if it satisfies the condition γ(s + t) = γ(s)γ(t). Examples In Lie theory, one-parameter groups correspond to one-dimensional subspaces of the associated Lie algebra. The Lie group–Lie algebra correspondence is the basis of a science begun by Sophus Lie in the 1890s. Another important case is seen in functional analysis, with G being the group of unitary operators on a Hilbert space. See Stone's theorem on one-parameter unitary groups. In his monograph Lie Groups, P. M. Cohn gave the following theorem: Any connected 1-dimensional Lie group is analytically isomorphic either to the additive group of real numbers ℝ, or to 𝕋, the additive group of real numbers modulo 1. In particular, every 1-dimensional Lie group is locally isomorphic to ℝ. Physics In physics, one-parameter groups describe dynamical systems. Furthermore, whenever a system of physical laws admits a one-parameter group of differentiable symmetries, then there is a conserved quantity, by Noether's theorem. In the study of spacetime the use of the unit hyperbola to calibrate spatio-temporal measurements has become common since Hermann Minkowski discussed it in 1908. The principle of relativity was reduced to the arbitrariness of which diameter of the unit hyperbola was used to determine a world-line. Using the parametrization of the hyperbola with hyperbolic angle, the theory of special relativity provided a calculus of relative motion with the one-parameter group indexed by rapidity. The rapidity replaces the velocity in the kinematics and dynamics of relativity theory. Since rapidity is unbounded, the one-parameter group it stands upon is non-compact. The rapidity concept was introduced by E. T. Whittaker in 1910, and named by Alfred Robb the next year. The rapidity parameter amounts to the length of a hyperbolic versor, a concept of the nineteenth century. Mathematical physicists James Cockle, William Kingdon Clifford, and Alexander Macfarlane had all employed in their writings an equivalent mapping of the Cartesian plane by the operator (cosh a + r sinh a), where a is the hyperbolic angle and r² = +1. In GL(n,C) An important example in the theory of Lie groups arises when G is taken to be GL(n;ℂ), the group of invertible n × n matrices with complex entries. In that case, a basic result is the following: Theorem: Suppose A : ℝ → GL(n;ℂ) is a one-parameter group. Then there exists a unique n × n matrix X such that A(t) = e^(tX) for all t. It follows from this result that A is differentiable, even though this was not an assumption of the theorem. The matrix X can then be recovered from A as X = A′(0).
This result can be used, for example, to show that any continuous homomorphism between matrix Lie groups is smooth. Topology A technical complication is that φ(ℝ) as a subspace of G may carry a topology that is coarser than that on ℝ; this may happen in cases where φ is injective. Think for example of the case where G is a torus T, and φ is constructed by winding a straight line round T at an irrational slope. In that case the induced topology may not be the standard one of the real line. See also Integral curve One-parameter semigroup Noether's theorem References Lie groups 1 (number) Topological groups
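To make the GL(n,C) result above concrete, here is a minimal numerical sketch in Python; the generator X below is an arbitrary illustrative choice, and SciPy's expm is used for the matrix exponential. It checks the homomorphism property A(s + t) = A(s)A(t) and recovers X from the derivative of A at 0.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary example generator X; A(t) = exp(tX) is then a one-parameter
# group in GL(2, R), and hence in GL(2, C).
X = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # generates rotations of the plane

def A(t):
    return expm(t * X)

s, t = 0.3, 1.1
print(np.allclose(A(s + t), A(s) @ A(t)))   # homomorphism property: True

# Recover the generator as the derivative at t = 0 (finite-difference estimate).
h = 1e-6
print(np.allclose((A(h) - np.eye(2)) / h, X, atol=1e-4))  # True
```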
One-parameter group
Mathematics
867
31,407,915
https://en.wikipedia.org/wiki/Sexual%20dimorphism%20in%20dinosaurs
Sexual dimorphism in dinosaurs refers to the different physical characteristics of male and female dinosaurs of the same species. This means that the male and female dinosaurs of a species may differ in size, color, or shape, or they may even look like a completely different species altogether, as in the case of the anglerfish. These differing physical characteristics can also be the deciding factor for choosing a mate or can be helpful for blending into the surrounding environment. Researching sexual dimorphism in extinct dinosaurs can be extremely difficult because suitable tissue and skeletal samples are required for testing, and most fossils and other samples have been damaged by decomposition and fossilization. Sexual dimorphism and dinosaurs Examining dinosaur fossils for sexually dimorphic characteristics requires complete and articulated skeletal and tissue remains. As terrestrial organisms, dinosaurs left carcasses subject to ecological and geographical influences that inevitably determined the degree of preservation, and well-preserved remains are improbable as a consequence of decomposition and fossilization. Some paleontologists have looked for sexual dimorphism among dinosaurs using statistics and comparison to ecologically or phylogenetically related modern animals. Examples of sexual dimorphism in dinosaurs The following summarizes academic research conducted by the palaeontologists Roy Chapman and Paul Penkalski. Although these studies are not conclusive, they do provide an insightful perspective. Apatosaurus and Diplodocus Female Apatosaurus and Diplodocus had interconnected caudal vertebrae that allowed them to keep their tails elevated to aid in copulation. The discovery that this fusion occurred in only 50% of Apatosaurus and Diplodocus skeletons and 25% of Camarasaurus skeletons indicated that this is a sexually dimorphic trait. Theropoda It has been hypothesized that male theropods possessed a retractable penis, a feature similar to modern-day crocodilians. Crocodilian skeletons were examined to determine whether there is a skeletal component that is distinctive between the sexes, to help provide insight into the physical disparities between male and female theropods. Findings revealed that the caudal chevrons of male crocodiles, used to anchor the penis muscles, were significantly larger than those of females. These findings have been criticized, and the matter remains a subject of debate among advocates and adversaries. Ornithopoda Studies of sexual dimorphism in hadrosaurs have generally centered on the distinctive cranial crests, which likely provided a function in sexual display. A biometric study of 36 skulls found that sexual dimorphism was exhibited in the crests of three species of hadrosaurids. The crests could be categorized as full (male) or narrow (female) and may have given some advantage in intrasexual mating competition. Ceratopsians According to Scott D. Sampson, if ceratopsids were to exhibit sexual dimorphism, modern ecological analogues suggest it would be found in display structures, such as horns and frills. No convincing evidence for sexual dimorphism in body size or mating signals is known in ceratopsids, although there is evidence that the more primitive ceratopsian Protoceratops andrewsi possessed sexes that were distinguishable based on frill and nasal prominence size.
This is consistent with other known tetrapod groups, in which midsized animals tend to exhibit markedly more sexual dimorphism than larger ones. However, it has been proposed that these differences can be better explained by intraspecific and ontogenetic variation than by sexual dimorphism. In addition, many sexually dimorphic traits that may have existed in ceratopsians, such as soft-tissue variations in coloration or dewlaps, would be unlikely to have been preserved in the fossil record. Stegosaurians A 2015 study on specimens of Hesperosaurus mjosi found evidence of sexual dimorphism in the shape of the dermal plates. Two plate morphs were described: one was short, wide, and oval-shaped, the other taller and narrower. Footnotes References Sampson, S.D. (2001). "Speculations on the socioecology of Ceratopsid dinosaurs (Ornithischia: Neoceratopsia)". In: Mesozoic Vertebrate Life, edited by Tanke, D. H., and Carpenter, K., Indiana University Press, pp. 263–276. Dinosaur paleobiology Sexual dimorphism
Sexual dimorphism in dinosaurs
Physics,Biology
930
77,531,717
https://en.wikipedia.org/wiki/Benzgalantamine
Benzgalantamine, sold under the brand name Zunveyl, is a medication used for the treatment of mild to moderate dementia of the Alzheimer's type. It is a cholinesterase inhibitor. Benzgalantamine is a prodrug of galantamine. The most common side effects include nausea, vomiting, diarrhea, dizziness, headache, and decreased appetite. Benzgalantamine was approved for medical use in the United States in July 2024. Medical uses Benzgalantamine is indicated for the treatment of mild to moderate dementia of the Alzheimer's type in adults. Side effects The most common side effects include nausea, vomiting, diarrhea, dizziness, headache, and decreased appetite. Society and culture Legal status Benzgalantamine was approved for medical use in the United States in July 2024. Names Benzgalantamine is the international nonproprietary name. References External links Treatment of Alzheimer's disease Benzoate esters Prodrugs
Benzgalantamine
Chemistry
206
12,443,159
https://en.wikipedia.org/wiki/Comprehensive%20planning
Comprehensive planning is an ordered process that determines community goals and aspirations in terms of community development. The end product is called a comprehensive plan, also known as a general plan, or master plan. This resulting document expresses and regulates public policies on transportation, utilities, land use, recreation, and housing. Comprehensive plans typically encompass large geographical areas, a broad range of topics, and cover a long-term time horizon. The term comprehensive plan is most often used by urban planners in the United States. Each city and county adopts and updates their plan to guide the growth and land development of their community, for both the current period and the long term. This "serious document" is then the foundation for establishing goals, purposes, zoning and activities allowed on each land parcel to provide compatibility and continuity to the entire region as well as each individual neighborhood. It has been one of the most important instruments in city and regional planning since the early twentieth century. History During the earliest times of American history, cities had little power given to them by State governments to control land use. After the American Revolution, the focus on property rights turned to self-rule and personal freedom, as this was a time of very strong personal property rights. Local governments had simple powers which included maintaining law and order and providing basic services. Cities had little power, if any at all, to direct development in the city. Cities began to focus on the provision of basic services during the 1840s at a time known as the Sanitary Reform Movement. During this time it became clear that there was a strong relationship between disease and the availability of a quality sewer system. Part of the movement included the development of sanitary survey planning to help bring sewer systems to infected parts of cities. From this planning also developed a new consciousness of townsite location. People began to understand the environmental and social impacts of building cities and developed ways in which to further lower the spread of deadly diseases. Frederick Law Olmsted was a firm believer in the relationship between the physical environment and sanitation, which helped lead to the development of grand parks and open spaces in communities to bring not only recreation, but sanitation as well. The Sanitary Reform Movement is seen by many as the first attempt at comprehensive planning, however it failed to be completely comprehensive because it focused on only one aspect of the city and did not consider the city as a whole. During the nineteenth and twentieth centuries, cities began to urbanize at very high rates. Cities became very dense and full of disease. As a response to the overpopulation and chaotic conditions, planning became a major focus of many large American cities. The City Beautiful movement was one of the many responses to the decaying city. The movement began in Chicago in 1890 with the World's Columbian Exposition of 1893 and lasted until about the 1920s. The focus on the movement was the design and architectural characteristics of the city. Leaders of the movement wanted to push the vision of the ideal city, and demonstrate to the world what cities could look like if they were created to be works of art. The White City was created for the exposition which embodied the visions of the movement with neoclassical designed buildings set against landscaped streets. 
Visitors to the exhibition began to realize that cities could be much more than dirty, overcrowded places. The movement spread across the United States and influenced many major American cities. In 1898, Ebenezer Howard published his book entitled "Tomorrow, a Peaceful Path to Reform," in which he developed the idea of a Garden City. This city was a planned development which included different land uses and community services. The communities were to be surrounded by a green belt and included many open spaces and parks within the city. These cities were designed to be completely self-sufficient and focused on decreasing the negative impacts traditional cities had on people's lives. Although these cities were considered to be utopian ideas, two cities were eventually built in this vision, Letchworth and Welwyn, England. The vision of Ebenezer Howard greatly impacted the idea of city planning in the United States for decades and helped in the development of the idea that cities must be planning comprehensively for growth. After the turn of the twentieth century, American cities began to see the need for local development and growth plans. Influential in this planning was Daniel Hudson Burnham who re-created the city plan for Washington, D.C. created by Pierre Charles L'Enfant in 1791. The original plan called for grid iron laid streets crossed by diagonal boulevards, squares, plazas, parks, monuments, and sculptures. Over time this plan was largely ignored and the city had developed against L'Enfant's vision. Burnham was instrumental in recreating the city plan and helping to return the city to its once intended form. In 1903, Burnham helped create the city growth plan for the city of Cleveland, Ohio and in 1906 he created the city plan for San Francisco, California. Although these were all city development plans, it was not until 1909 when Burnham created the city plan for Chicago that his plans were comprehensive. The plan of Chicago is known today as the first comprehensive plan and it began a movement of comprehensive planning that emphasized planning as a way to not only make cities more beautiful, but to function better as well. Purpose A comprehensive plan has significant benefits for a whole community as it helps to identify, define and protect important existing resources while also providing a blueprint for future growth that ensures equity and resilience for all stakeholders. Such a plan provides for common goals and community consensus as opposed to "spot zoning". A comprehensive plan may address but is not limited to the following considerations Existing and proposed land uses, and their intensity Impact on neighborhood character Equity Resilience and/or sustainability Protection of historical resources, cultural resources, natural resources, coastal resources and sensitive environmental areas (and agricultural resources if applicable) Population, demographic and socioeconomic trends Traffic and public transportation Utilities and infrastructure Housing resources and needs Economic development and tourism Comprehensive planning process Comprehensive Planning typically follows a process that may consist of, but is not limited to a certain number of steps and assessments. By following this process, planners are able to determine a wide range of interconnecting issues that affect an urban area. Each step can be seen as interdependent and many times planners will revise the order to best fit their needs and wants. 
Identifying issues The planner must first address the issue they are investigating. "To be relevant, the planning process must identify and address not only contemporary issues of concern to residents, workers, property owners, and business people, but also the emerging issues that will be important in the future." Generally, planners determine community issues by involving various community leaders, community organizations, and ordinary citizens. Stating goals Once issues have been identified by a community, goals can then be established. Goals are community visions. They establish priorities for communities and help community leaders make future decisions which will affect the city. Stating goals is not always an easy process and it requires the active participation of all people in the community. Collecting data Data is needed in the planning process in order to evaluate current city conditions as well as to predict future conditions. Data is most easily collected from the United States Census Bureau, however many communities actively collect their own data. The most typical data collected for a comprehensive plan include data about the environment, traffic conditions, economic conditions, social conditions (such as population and income), public services and utilities, and land use conditions (such as housing and zoning). Once this data is collected it is analyzed and studied. Outcomes of the data collection process include population projections, economic condition forecasts, and future housing needs. Preparing the plan The plan is prepared using the information gathered during the data collection and goal setting stages. A typical comprehensive plan begins by giving a brief background of the current and future conditions found in the data collection step. Following the background information are the community goals and the plans that will be used in order to implement those goals into the community. Plans may also contain separate sections for important issues such as transportation or housing which follow the same standard format. Creating implementation plans During this stage of the process different programs are thought of in order to implement the goals of the plan. These plans focus on issues such as cost and effectiveness. It is possible that a variety of plans will result from this process in order to realize one goal. These different plans are known as alternatives. Evaluating alternatives Each alternative should be evaluated by community leaders to ensure the most efficient and cost-effective way to realize the community's goals. During this stage each alternative should be weighed given its potential positive and negative effects, impacts on the community, and impacts on the city government. One alternative should be chosen that best meets the needs and desires of the community and community leaders for meeting the community goals. Adopting a plan The community needs to adopt the plan as an official statement of policy in order for it to take effect. This is usually done by the City Council and through public hearings. The City Council may choose not to adopt the plan, which would require planners to refine the work they did during previous steps. Once the plan is accepted by city officials it is then a legal statement of community policy in regards to future development. Implementing and monitoring the plan Using the implementation plans defined in the earlier stages, the city will carry out the goals in the comprehensive plan. 
City planning staff monitor the outcomes of the plan and may propose future changes if the results are not desired. A comprehensive plan is not a permanent document. It can be changed and rewritten over time. For many fast growing communities, it is necessary to revise or update the comprehensive plan every five to ten years. In order for the comprehensive plan to be relevant to the community it must remain current. Legal basis The basis for comprehensive planning comes from the government's duty and right to protect the health and welfare of its citizens. The power for local governments to plan generally comes from state planning enabling legislation; however, local governments in most states are not required by law to engage in comprehensive planning. State statutes usually provide the legal framework necessary for those communities choosing to participate while allowing others to disengage themselves with the process. The legal provision for comprehensive planning comes from what is called the Standard State Zoning Enabling Act which was written by the United States Department of Commerce in the 1920s. This act was never passed by the United States Congress but was rather a law written for state legislatures to willingly adopt. Many states did choose to adopt the act which provided local governments with the framework to engage in land use planning. Because the act never gave a clear definition for comprehensive planning, the Department of Commerce wrote another act, the Standard City Planning Enabling Act of 1928, which defined more precisely what a comprehensive plan is and how it should be used. In states that do not require local governments to plan comprehensively, state governments usually provide many incentives to encourage the process at the local level. In Georgia, for example, the state government gives many incentives to local governments to establish comprehensive plans to guide development. Today, almost every county in Georgia has established a plan voluntarily. However, a comprehensive plan is not usually legally binding. A community's ordinances must be amended in order to legally implement the provisions required to execute the comprehensive plan. By country United States In California the General Plan (also known as a comprehensive plan in other states) is a document providing a long-range plan for a city’s physical development. Local jurisdictions have freedom as to what their general plans include, however there are certain requirements under California state law that each general plan must meet; failure to do so could result in suspension of future development. Each general plan must include the vision, goals, and objectives of the city or county in terms of planning and development within eight different “elements” defined by the state as: land use, housing, circulation, conservation, noise, safety, open space, and environmental justice (added as an official element in 2016). Green General Plans Local governments are continually implementing green measures into their general plans to promote community-wide sustainable practices. Introducing green elements and environmental resource elements can help local governments reach goals by lowering greenhouse gas emissions, reducing waste, improving energy and water efficiency and complying with state and nationwide standards such as California’s Global Warming Solutions Act of 2006. Canada In Canada, comprehensive planning is generally known as strategic planning or visioning. 
It is usually accompanied by public consultation. When cities and municipalities engage in comprehensive planning, the resulting document is known as an Official Community Plan, or OCP for short. (In Alberta, the resultant document is referred to as a Municipal Development Plan.) Iran The city council of Isfahan, Iran, has a strategy council, which has developed its final program. See also Development plan Traffic Urban planning; also covers rural planning Urbanization Zoning References Citations Sources Campbell, Scott and Fainstein, Susan. (2003) "Readings in Planning Theory". Malden, MA: Blackwell Publishing. Juergensmeyer, Julian and Roberts, Thomas. (2003) "Land Use Planning and Development Regulation Law". St. Paul: West Group. Urban studies and planning terminology Urban planning in the United States Urban planning
Comprehensive planning
Engineering
2,650
40,894
https://en.wikipedia.org/wiki/Coherence%20time
For an electromagnetic wave, the coherence time is the time over which a propagating wave (especially a laser or maser beam) may be considered coherent, meaning that its phase is, on average, predictable. In long-distance transmission systems, the coherence time may be reduced by propagation factors such as dispersion, scattering, and diffraction. The coherence time, usually designated τ, is calculated by dividing the coherence length by the phase velocity of light in a medium; it is approximately given by τ ≈ λ² / (c Δλ) = 1 / Δν, where λ is the central wavelength of the source, Δν and Δλ are the spectral widths of the source in units of frequency and wavelength respectively, and c is the speed of light in vacuum. A single-mode fiber laser has a linewidth of a few kHz, corresponding to a coherence time of a few hundred microseconds. Hydrogen masers have a linewidth of around 1 Hz, corresponding to a coherence time of about one second; their coherence length approximately corresponds to the distance from the Earth to the Moon. As of 2022, research groups worldwide have demonstrated superconducting qubits with coherence times of up to several hundred microseconds. See also Atomic coherence Temporal coherence Degree of coherence References Electromagnetic radiation Physical optics Radio frequency propagation Optical quantities
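As a rough numerical check of the figures quoted above, here is a minimal Python sketch using the approximation τ ≈ 1/Δν and coherence length c·τ; the specific linewidths (3 kHz for the fiber laser, 1 Hz for the maser) are illustrative assumptions consistent with "a few kHz" and "around 1 Hz".

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def coherence_time(delta_nu_hz):
    """Coherence time from spectral linewidth, tau ~ 1 / delta_nu."""
    return 1.0 / delta_nu_hz

def coherence_length(delta_nu_hz):
    """Coherence length L = c * tau."""
    return C * coherence_time(delta_nu_hz)

print(coherence_time(3e3))          # ~3.3e-4 s: a few hundred microseconds (fiber laser)
print(coherence_time(1.0))          # 1.0 s (hydrogen maser)
print(coherence_length(1.0) / 1e3)  # ~3.0e5 km, the same order as the Earth-Moon distance
```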
Coherence time
Physics,Mathematics
270
103,061
https://en.wikipedia.org/wiki/A%20grain%20of%20salt
To take something with a "grain of salt" or "pinch of salt" is an English idiom that suggests viewing something, specifically claims that may be misleading or unverified, with skepticism, or not interpreting it literally. In the old-fashioned English units of weight, a grain weighs approximately 65 mg, which is about how much table salt a person might pick up between the fingers as a pinch. History The phrase is thought to come from Pliny the Elder's Naturalis Historia, regarding the discovery of a recipe written by the Pontic king Mithridates to make someone immune to poison. One of the ingredients in the recipe was a grain of salt. Threats involving poison were thus to be taken "with a grain of salt", and therefore less seriously. The Latin phrase cum grano salis ("with a grain of salt") is not what Pliny wrote; it is constructed according to the grammar of modern European languages rather than Classical Latin. Pliny's actual words were addito salis grano ("after having added a grain of salt"). The Latin word sal (salis is the genitive) means both "salt" and "wit", so the Latin phrase could be translated as either "with a grain of salt" or "with a grain of wit", in effect "with caution" or cautiously. The phrase is typically said as "with a pinch of salt" in British English and "with a grain of salt" in American English. References External links Edible salt English-language idioms English phrases Mithridates VI Eupator
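The approximately 65 mg figure above follows from the definition of the grain in the avoirdupois system (7,000 grains to the pound); a minimal check, assuming the standard pound-to-gram conversion:

```python
POUND_G = 453.59237       # grams per avoirdupois pound (exact by definition)
GRAINS_PER_POUND = 7000   # grains per avoirdupois pound

grain_mg = POUND_G / GRAINS_PER_POUND * 1000
print(round(grain_mg, 1))  # 64.8 mg, i.e. approximately 65 mg
```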
A grain of salt
Chemistry
312
78,700,040
https://en.wikipedia.org/wiki/Consalazinic%20acid
Consalazinic acid is a chemical compound classified as a depsidone; it is a secondary metabolite produced by a variety of lichens. Consalazinic acid was first isolated from Parmotrema subisidiosum and described in 1980. It has since been identified in many other lichens. References Lactones Lichen products Polyphenols Heterocyclic compounds with 4 rings Benzodioxepines Benzofurans
Consalazinic acid
Chemistry
104
47,318,237
https://en.wikipedia.org/wiki/Action%20Center
Action Center is a notification center included with Windows Phone 8.1, Windows 10 and Windows 10 Mobile. It was introduced with Windows Phone 8.1 in July 2014, and came to the desktop with the launch of Windows 10 on July 29, 2015. The Action Center replaces the charms in Windows 10, and was itself replaced by Quick Settings and the Notification Center in Windows 11. Features Action Center allows for four quick settings, and in Windows 10 users can expand the view to show all of the quick settings. Notifications are sorted into categories by app, and users can swipe right to clear notifications. Action Center also supports actionable notifications starting with Windows 10. In the mobile version, the user can swipe from the top to the bottom of the screen to invoke Action Center, and further features introduced in Windows Phone 8.1 include the ability to change simple settings such as volume controls. The new notification area's design allows the user to, for example, change wireless networks, turn Bluetooth and Airplane Mode on or off, and access "Driving Mode" from four customisable boxes at the top of the screen; beneath these four horizontally placed boxes are recent text messages and social integration. On the desktop version, the user can invoke Action Center by clicking on its icon on the taskbar (at the lower right corner of the screen), or by swiping from the right. Microsoft announced at Microsoft Build 2016 that Cortana would be able to mirror notifications between the Action Centers of Windows 10 Mobile and Windows 10, and that Cortana would also be able to synchronize notifications from Android devices to the Windows 10 Action Center. See also Notification Center (iOS and OS X) Notification service References Windows 10 Windows Phone Windows Phone software Action Centre
Action Center
Technology
364
29,924,238
https://en.wikipedia.org/wiki/Tomb%20Raider%20%282013%20video%20game%29
Tomb Raider is a 2013 action-adventure game developed by Crystal Dynamics and published by Square Enix's European branch. It is the tenth main entry and a reboot of the Tomb Raider series, acting as the first instalment in the Survivor trilogy that reconstructs the origins of Lara Croft. The game was released for PlayStation 3, Windows, and Xbox 360 on 5 March 2013. Gameplay focuses on survival, with exploration when traversing the island and visiting various optional tombs. It is the first game in the main series to have multiplayer and the first game in the series to be published by Square Enix after the latter's acquisition of Eidos Interactive in 2009. Crystal Dynamics began development of Tomb Raider soon after the release of Tomb Raider: Underworld in 2008. Rather than a sequel, the team decided to reboot the series, re-establishing the origins of Lara Croft for the second time, as they did with Tomb Raider: Legend. Tomb Raider is set on Yamatai, an island from which Lara, who is untested and not yet the battle-hardened explorer she is in other titles in the series, must save her friends and escape while being hunted down by a malevolent cult. Camilla Luddington was hired to voice and perform as Lara Croft, replacing Keeley Hawes. Tomb Raider received critical acclaim, with praise for the graphics, gameplay, Luddington's performance as Lara, and Lara's characterization and development, although the addition of a multiplayer mode was not well received. The game sold over 14.5 million units by October 2021, making it the best-selling Tomb Raider title to date. A remastered version, Tomb Raider: Definitive Edition, was released for PlayStation 4 and Xbox One in January 2014 and for Windows in April 2024, containing improved graphics, new control features, and downloadable content. A sequel, Rise of the Tomb Raider, was released in November 2015 and a third installment, Shadow of the Tomb Raider, was released in September 2018. Gameplay Tomb Raider is presented in third-person perspective. Players take control of the series lead character Lara Croft. The game uses an interconnected hub-and-spoke model that combines action-adventure, exploration, and survival elements. Players can traverse between the camps and across the island using footpaths, improvised or already-available ziplines and climbable tracks. Many of Lara's moves are carried over from the previous games created by Crystal Dynamics, with some tweaks added, such as incorporating elements of stealth gameplay. Quick time events are scattered at regular intervals throughout the game, often appearing at crucial or fast-moving points in the game's plot, such as extracting a shard of metal, and escaping a collapsing cave. The combat of the game borrows multiple elements from Naughty Dog's Uncharted series, with players having the ability to free-aim Lara's bow and the guns she salvages, engage in close-quarter combat and perform stealth kills. Players can use Survival Instinct, an ability in which enemies, collectables and objects pivotal to environmental puzzles will be highlighted. The game incorporates role-playing elements: as players progress through the game, they earn experience points from performing certain actions and completing in-game challenges linked with hunting, exploring and combat: this enables players' skills and abilities to be upgraded in specific ways, such as giving her more storage capacity for arrows and ammunition. 
Players can upgrade and customize weapons using salvaged materials collected across the island. There is a character progression mechanic in the game: better items, weapons and equipment are gained as players progress, though the appearance of most of these items is closely linked to events in the story. In addition to the main story, players can complete multiple side quests, explore the island, revisit locations, and search for challenge tombs. Multiplayer Alongside the single-player mode is an online multiplayer mode, which allows players to compete in several maps. In each multiplayer match, there are two enemy teams: four survivors and four scavengers, and there are three types of games for multiplayer to compete in, played in five different maps: the modes are Team Deathmatch, Private Rescue and Cry for Help. The first mode is a player versus player (PvP) combat scenario, with teams pitted against each other, and the winning team being the one to kill the opposing team in three separate matches. In the second mode, the "survivors" team must take medical supplies to a specific point on the map, while the "scavengers" must reach a certain number of kills, both within a ten-minute time limit. The third mode, "Cry for Help", involves the survivors exploring the maps and retrieving batteries for defended radio beacons while being hunted by the scavengers. Across all three modes, weapons and destroyable environments from the single-player campaign are carried over. Synopsis Setting and characters The game is set on Yamatai, a fictional lost island in the Dragon's Triangle off the coast of Japan. The island—and the kingdom that once existed there—is shrouded in mystery, given its reputation for fearsome storms and shipwrecks that litter its coastline. Yamatai was once ruled by queen Himiko, known as the "Sun Queen", who, according to legend, was blessed with shamanistic powers that enabled her to control the weather. Very little is known about Yamatai's history in the time since Himiko's death, other than that the island's infamy was established shortly thereafter. In exploring the island, the player may find evidence that—among others—Portuguese traders, United States Marines, and a Japanese military project were all stranded on Yamatai at various points throughout history. At the start of the game, the island is populated by the Solarii Brotherhood, a violent cult of criminals, mercenaries, and shipwreck survivors. The Solarii Brotherhood has established its own society based on the worship of Himiko, complete with a social structure and laws, with their exact purpose and intentions being explored throughout the story. The player takes on the role of Lara Croft, a young and ambitious archaeology graduate whose theories on the location of the lost kingdom of Yamatai have convinced the Nishimura family—descendants from the people of Yamatai themselves—to fund an expedition in search of the kingdom. The expedition is led by Dr. 
James Whitman, a celebrity archaeologist who has fallen on hard times and is desperate to avoid bankruptcy, and is accompanied by Conrad Roth, a Royal Marine turned adventurer and close friend of the Croft family who serves as mentor to Lara; Samantha "Sam" Nishimura, Lara's friend and a representative of the Nishimura family who films the expedition for a documentary; Joslyn Reyes, a skeptical and temperamental mechanic and single mother; Jonah Maiava, an imposing and placid fisherman who is willing to believe in the existence of the paranormal and esoteric; Angus "Grim" Grimaldi, the gruff helmsman of the Endurance; and Alex Weiss, a goofy electronics specialist. Plot Lara sets out on her first expedition aboard the ship Endurance, intending to find the lost kingdom of Yamatai. By her suggestion and against Whitman's advice, the expedition ventures into the Dragon's Triangle. The ship is struck by a violent storm and sinks, stranding the survivors on the isolated island. Lara is separated from the others and is forced to escape the cave of a deranged savage. As Lara locates the other survivors, she finds more evidence that the island is inhabited. She finds her friend Sam and a man called Mathias, who claims to be one of the passengers. As Sam tells Mathias the legends of Himiko, Lara passes out; when she wakes, Mathias and Sam are gone. When Lara reunites with the other survivors, Whitman decides to break off from the main party with Lara and search for Roth, who is still missing, while the rest of the group (Reyes, Jonah, Alex, and Grim) look for Sam and Mathias. As Lara and Whitman explore, they discover that the island's inhabitants are worshipping Himiko, confirming that the island is Yamatai. The two are captured by the islanders and taken to a settlement along with other survivors from the Endurance. When the survivors attempt an escape, the captors turn on them. Lara is separated from Whitman and is forced to kill one of her attackers. She locates an injured Roth, and using his equipment, she sets off for a communications relay at the very top of the mountain to contact the outside world and call for aid. After successfully hailing a plane searching for the Endurance and setting a signal fire for them to follow, Lara witnesses a fierce storm materialize and destroy the plane. Although the pilot successfully parachutes to safety, Lara is powerless to stop the island's inhabitants from killing him. Lara is contacted by Alex and Reyes, who reveal that Sam has been kidnapped by the island's inhabitants, a violent cult known as the Solarii Brotherhood. Lara, who is closest to Sam's position, tries to rescue her but is foiled by Mathias—revealed to be the leader of the Solarii—who orders her killed. Lara is saved by the intervention of a samurai dubbed "Oni" and taken to an ancient monastery in the mountains. Escaping again, Lara stumbles upon a ritual chamber, where she learns that a "fire ritual" was used to choose the Sun Queen's successor as part of a ceremony called the "Ascension". A terrified Sam manages to contact Lara and informs her that the Solarii intends to put her through the fire ritual, which will burn her to death if unsuccessful. Lara fights her way through the Solarii fortress with help from Grim, who is killed after the Solarii captures him. With Roth's help, Lara infiltrates the palace and witnesses Mathias putting Sam through the fire ritual. Lara tries to save Sam, but she is overpowered by Mathias and his men. 
Sam is not harmed by the flames, which are extinguished by a great wind, marking her as Himiko's rightful successor. Lara narrowly escapes captivity once again and doubles back to help her friends, whose attempts to reach Sam have resulted in their capture. Aided by Whitman—who managed to negotiate some degree of freedom with the Solarii—Lara returns to the palace to rescue Sam as Roth commandeers a helicopter to get them out. Having witnessed the storm that forced the search plane to crash, Lara sends Sam to escape by land and tries to force the pilot to land as a second storm brews up, striking the helicopter and forcing them to crash. Lara nearly dies, and Roth is fatally wounded by Mathias while saving her. Lara realizes that the storms are being magically generated to keep everyone trapped on the island. She meets up with the other survivors, who have evaded the Solarii long enough to secure a boat that can be repaired and used to escape. They are joined by Whitman, who claims to have escaped, though Lara suspects him of working with the cultists. Lara heads for the wreck of the Endurance to meet up with Alex, who had previously gone there to salvage the tools needed to repair the boat. She finds him trapped under the wreckage, but Alex forces her to flee from Solarii cultists and sacrifices himself so Lara can escape with the tools. Following the lead of a World War II-era Japanese military expedition researching the storms, Lara explores an ancient coastal tomb. She discovers the remains of the general of the Stormguard—the Oni defending the monastery—who had committed seppuku; in his final message, he reveals that Himiko's successor took her own life rather than receive her power, leaving Himiko's soul trapped in her body after death. Lara realizes that the "Ascension" is a ritual that transfers Himiko's soul into a new body, destroying the host's soul in the process. Himiko's spirit wants to escape its current body, and Mathias plans to offer Sam as a new host. Lara returns to the survivors to find that Whitman has betrayed them, abducting Sam and giving her to Mathias. Lara, Jonah, and Reyes give chase to the monastery, with Lara arriving just in time to see Whitman killed by the Oni. After fighting her way through the queen's guards, Lara arrives at the top of the monastery in time to see Mathias start the Ascension ritual. She works her way to Mathias, confronting Solarii and guards alike. Lara kills Mathias when she shoots him from the roof of the monastery using her signature dual-wield style, before destroying Himiko's remains to save Sam. With the storms dispersed, Lara, Sam, Reyes, and Jonah leave the island and are picked up by a cargo ship. As she and her friends sail home, Lara decides that there are many more myths to be found and resolves to uncover them, stating that she is not returning home just yet. Development Following Tomb Raider: Underworld, Crystal Dynamics was split into two teams; one beginning work on the next sequential pillar of the Tomb Raider franchise, the other focusing on the newly created spin-off Lara Croft series (debuting with Lara Croft and the Guardian of Light in 2010). Following pre-announcement media hype while the game's title was under embargo, in November 2010, Square Enix filed for trademark of the slogan for the new Tomb Raider game; "A Survivor is Born". Square Enix revealed in December that Tomb Raider was in production for nearly 2 years. 
Studio head Darrell Gallagher said that the new title was unlike anything that had come before, describing it as an origin story for Lara Croft that takes her journey in a new direction. In January 2012, when asked if the game would be available on Nintendo's Wii U console, Crystal Dynamics global brand director Karl Stewart responded that there were no plans to have the game available on that platform. According to Stewart, the reason for this was that "it would not be right" for the game to simply be ported, as the developers had built the game to be platform-specific before the Wii U was announced; he also mentioned that if they had started building the game for the platform, the team would have built it very differently and with unique functionality. The multiplayer mode was created by Canadian video game development studio Eidos-Montréal, known for making Deus Ex: Human Revolution. That May, the game was delayed and was rescheduled for the first quarter of 2013. Darrell Gallagher said that they were "doing things that are completely new" for this title, which was the reason for the delay. The Definitive Edition framerate is unlocked on PlayStation 4, varying from 32 to 60fps (averaging 53.36fps). The Xbox One version is locked to 30fps (averaging 29.98fps); both versions of the game have a resolution of 1080p. Animated model Lara Croft's model is animated using compiled performance capture, a technique used in the previous installment Tomb Raider: Underworld. The game was built on Crystal Dynamics' game engine called "Foundation". Lara's face is based on that of model Megan Farquhar. The "Turning Point" CGI teaser trailer premiered at E3 in June 2011, indicating that the release date was to be in the third quarter of 2012. The trailer was produced by Square Enix's CGI studio Visual Works. Voice cast Keeley Hawes did not return as Lara Croft for 2013's Tomb Raider, after working on Tomb Raider: Legend, Anniversary, Underworld and Lara Croft and the Guardian of Light. She reprised the role of Lara in the downloadable game Lara Croft and the Temple of Osiris, which was released in December 2014. Crystal Dynamics was said to be auditioning dozens of voice actresses. The voice actress of Lara Croft was revealed to be Camilla Luddington in June 2012. Gameplay showcases The gameplay trailer was released online in May 2012, showcasing more action-based gameplay along with varying plot elements. The trailer confirmed the presence of several other non-playable characters besides Lara on the island, many of whom appear to be part of a menacing organization. On 4 June, at Microsoft's E3 2012 press conference, a new gameplay demonstration was shown, depicting environmental destruction and other interactivity, stealth combat using a bow and arrow, quick-time events and parachuting. During the summer, gameplay was shown of Lara hunting, exploring the island and killing for the first time; it was shown at the Eurogamer Expo in London on 27 September. On 8 December, a new trailer was shown during the Spike Video Game Awards. It was introduced by Camilla Luddington, and during the event the trailer was followed by an orchestral performance led by composer Jason Graves. The next week, IGN presented Tomb Raider Week: each day from Monday to Friday, previews, features and trailers were released, showing more details of the upgrade system, survival tools and challenge tombs. Tomb Raider went gold on 8 February 2013.
Music Tomb Raider's soundtrack was composed by Jason Graves, whose previous work includes Dead Space and its sequels, F.E.A.R. 3 and Star Trek: Legacy. The Tomb Raider: Original Soundtrack was released on 5 March 2013, alongside the game's worldwide release. The album was released to critical acclaim, with multiple sites, including Forbes and the magazine Film Score Monthly, giving it high praise. A podcast was released by Game Informer in December 2010, featuring a "sneak peek at a track from the game itself" composed by Aleksandar Dimitrijevic. Crystal Dynamics global brand director, Karl Stewart, clarified Game Informer's statement, confirming that Alex Dimitrijevic had been hired to score the trailer, but that the game's official composer had not yet been revealed. After the trailer's première in June 2011, Stewart stated in regard to the final Turning Point score that "...this piece is not a piece that [Alex Dimitrijevic]'s worked on". Meagan Marie, community manager at Crystal Dynamics, expressed on the Tomb Raider blog that their goal was to release a soundtrack. Stewart added that "this is a completely new composer and somebody who we've brought in to work on the game as well as this [trailer] piece" and that "we're going to make a bigger announcement later in the year". In the Making of Turning Point, sound designer Alex Wilmer explained that the unannounced composer had remotely directed an in-house concert violinist to perform the "very intimate" piece. In the fourth Crystal Habit podcast, which premiered on the Tomb Raider blog in October 2011, Marie spoke to Wilmer and lead sound designer Jack Grillo about their collaboration with the unannounced composer. Grillo stated that "we're doing this overture... where we're taking an outline of the narrative structure and having our composer create different themes and textures that would span the entire game", while Wilmer emphasised that the composer's music would dynamically adapt in-game, scored "...emotionally so that it reacts instantly to what happens". In an episode of The Final Hours of Tomb Raider on YouTube, the composer was revealed to be Jason Graves. Apart from his trademark orchestral style, Graves wished to create a signature sound that would leave an impression on players and stand out when heard. Along with using objects like mallets to create odd musical sounds, Graves, with the help of neighbouring architect Matt McConnell, created a special percussion instrument that would produce a variety of odd signature sounds to mix in with the rest of the orchestral score. Although the game is set off the coast of Japan, Graves did not want Japanese instrumentation: instead, he chose sounds and themes that would be indicative of the scavengers on the island, who came from multiple regions of the globe. Using different percussion instruments in different ways, he was able to create the feeling of "found sounds". Release Tomb Raider was released as scheduled on 5 March 2013 for PlayStation 3, Xbox 360 and Microsoft Windows, but was released earlier in Australia, on 1 March. On 25 April, Tomb Raider was released in Japan. A port of the game for OS X was released by Feral Interactive on 23 January 2014. Tomb Raider: Definitive Edition, a graphically updated version containing new control features and all downloadable content, was released worldwide in January 2014 for PlayStation 4 and Xbox One.
Tomb Raider: Game of the Year Edition, a separate version also including all bonuses but without the graphical enhancements, was also released that month for Xbox 360, PlayStation 3 and PC. Tomb Raider: Definitive Edition was also released for Windows in April 2024, exclusively on the Microsoft Store. Unlike the previous installments, which received a T rating, Tomb Raider is the first game in the series to receive an M rating from the ESRB, due to blood and gore, intense violence and strong language. Pre-release incentives and retail editions Prior to the game's release, various stores offered extra items as a way of attracting customers to order the game from them. In North America, GameStop offered the in-game Challenge Tomb. Best Buy orders received the graphic novel Tomb Raider: The Beginning; these orders also came with the Aviatrix Skin as well as the Shanty Town multiplayer map. Walmart orders received a free digital download of Lara Croft and the Guardian of Light, access to a real-life scavenger hunt, the Shanty Town multiplayer map and an exclusive Guerrilla Skin outfit. Pre-orders from the Microsoft Store received 1600 Microsoft Points for Xbox Live. Customers ordering from Amazon received access to the Tomb Raider: The Final Hours Edition, which included a 32-page art book, an in-game Hunter Skin for Lara, and a digital copy of Geoff Keighley's The Final Hours of Tomb Raider for the Kindle Fire; these customers also received the Shanty Town multiplayer map and an access code to a real-life scavenger hunt. Customers who purchased from Steam received a free copy of Lara Croft and the Guardian of Light, a Challenge Tomb entitled Tomb of the Lost Adventurer and the Shanty Town multiplayer map. Steam also offered three exclusive bonus Team Fortress 2 items. In the United Kingdom, ShopTo.net also offered the graphic novel Tomb Raider: The Beginning. Orders from Amazon.co.uk received the Shanty Town multiplayer map. Exclusive to Europe was the Survival Edition. This edition came with a mini art book, a double-sided map of the in-game island, a CD soundtrack, an exclusive weapons pack, and a survival pouch. The Collector's Edition for Europe contains everything from the Survival Edition along with an 8" Play Arts Kai Lara Croft figurine in a metal box. The Collector's Edition for North America is similar to the European one, but instead of a mini art book and a survival pouch it contains three iron-on badges and a lithograph. The Survival Edition from Steam includes a digital 32-page art book, 10 downloadable tracks from the Tomb Raider soundtrack, a digital double-sided map of the game's island, a digital comic, the Guerrilla Skin outfit and three in-game weapons from Hitman: Absolution. In the United Kingdom, Game offered the exclusive Explorer Edition bundle, which included an exploration-themed Challenge Tomb and a skill upgrade. Exclusive to Tesco was the Combat Strike Pack, which included three weaponry upgrades and a skill upgrade. A limited edition wireless controller for the Xbox 360 was released on 5 March 2013; a download code for an Xbox-exclusive playable Tomb Raider multiplayer character was included. Downloadable content At E3 2012, during Microsoft's press conference, Crystal Dynamics' Darrell Gallagher announced that Xbox 360 users would get early access to downloadable content (DLC). In March 2013, Xbox Live users had early access to the "Caves & Cliffs" map pack. 
The map pack consisted of three new Tomb Raider multiplayer maps, entitled "Scavenger Caverns", "Cliff Shantytown" and "Burning Village". The pack later became available to PSN and Steam users in April. The "1939" multiplayer map pack was released for Xbox 360, PS3 and PC, consisting of two new multiplayer maps, entitled "Dogfight" and "Forest Meadow". Later in April, Square Enix released a Japanese Language Pack on Steam. A multiplayer DLC pack entitled "Shipwrecked" was released on 7 May on Xbox Live, PSN and Steam, offering two additional multiplayer maps, "Lost Fleet" and "Himiko's Cradle". Additionally, a single-player outfit pack was released on Xbox Live; the pack contained the Demolition, Sure-Shot and Mountaineer outfits. Reception Tomb Raider received critical acclaim. On review aggregator website Metacritic, it garnered "generally favorable reviews". GamesMaster magazine gave the game a score of 90%, as well as the "GamesMaster Gold award" (awarded to games that manage a score of 90% or above). The editor regarded the quality of the visuals, the length and depth of the gameplay, and the "spectacular" last third of the game as the highlights. IGN's Keza MacDonald spoke extremely positively, stating that she felt the game was "exciting" and "beautifully presented", and included "great characterization" and more depth. She gave the game an overall score of 9.1 out of 10, the highest score IGN had given a game in the series since 1996's Tomb Raider, describing it as "amazing" and concluding that the game "did justice" to both the character and the franchise. Ryan Taljonick of GamesRadar lauded the location's setting and environment, and expressed that the areas never feel like a rehash of one another. Taljonick also felt that the game had great pacing, unrivaled by any other game in the genre. Furthermore, the reviewer considered Lara's character development "an integral part" of the whole game's experience, and concluded that Tomb Raider "is a fantastic game and an excellent origin story for one of gaming's original treasure seekers". Australian TV show Good Game praised the game: it was rated 10/10 by both hosts, becoming the eighth game in the show's seven-year run to do so. Giant Bomb gave the game four stars out of five, having only minor issues with the game's tone being at conflict with its action. One of the major criticisms of the game stemmed from a disparity between the emotional thrust of the story and the actions of the player, with GameTrailers' Justin Speer pointing out that while the story attempted to characterise Lara Croft as vulnerable and uncomfortable with killing, the player was encouraged to engage enemies aggressively and use brutal tactics to earn more experience points. Speer felt that this paradoxical approach ultimately let the game down, as it undermined Lara's character to the point where he found it difficult to identify with her at all. IGN's Keza MacDonald highlighted the same issue, but was less critical of it than Speer, pointing out that both Lara and the player had to adapt quickly to killing in order to survive. However, Game Informer's Matt Miller noted that the game offered the player several options for progressing through its combat situations, and that the player could avoid open conflict entirely if they chose to do so. 
He praised the behaviour and presence of the enemies for the way they felt like they had actual tasks to perform on the island, rather than being clusters of polygons whose only function was to be killed so the player could progress. On the subject of character development, GamesRadar's Ryan Taljonick expressed that the supporting characters were underdeveloped relative to Lara Croft, describing them as generic and, while rarely annoying, not memorable. While many reviews applauded the single-player campaign, the multiplayer mode bore the brunt of the game's criticism, with MacDonald, Speer and Miller all finding fault with it, describing it as lackluster and stating that the difference between the developer's vision for the game mode and the finished product made it difficult to enjoy. Tomb Raider: Definitive Edition received positive reviews. Game Informer's Matt Helgeson considered the updated graphics at native 1080p resolution a good addition to the core Tomb Raider experience. He cited some differences in graphics between the two versions and noted a slightly smoother frame rate on the PlayStation 4 version. The Escapist's Jim Sterling was less receptive to the Definitive Edition; they praised the visual improvements, but felt that the nominal content additions to the single-player experience and the game's price point made it difficult to recommend to anyone but players who had not played the original version. GameZone's Matt Liebl gave Tomb Raider: Definitive Edition a 9/10 and recommended it for players who had never played the original version. Prior to the game's release, news of an attempted rape plot element drew ire and led to multiple op-ed pieces. A developer interview described an early cutscene as an attempted "rape" that proves formative in Croft's genesis story, but the developer later reiterated that sexual assault was not a theme of the game and that the executive producer had misspoken. The treatment of sexual assault and women had already been a volatile topic in games journalism. Tomb Raider's lead writer later reflected that the controversy was the result of misinformation. Sales The game sold more than 1 million copies less than 48 hours after its release. In the United Kingdom, Tomb Raider debuted at number one on the charts and became the biggest UK title launch of 2013, surpassing the sales of Aliens: Colonial Marines, before being overtaken by Grand Theft Auto V. Tomb Raider set a new record for the franchise, more than doubling the debut sales of Tomb Raider: Legend. Furthermore, the Xbox 360 and PlayStation 3 versions of Tomb Raider set new week-one records as the fastest-selling individual formats of any Tomb Raider title so far, a record previously held by Tomb Raider: The Angel of Darkness. Tomb Raider topped the charts in France, Ireland, Italy, the Netherlands, Norway, and the United States. In the United States, Tomb Raider was the second best-selling title of March, excluding download sales, behind BioShock Infinite. In Japan, Tomb Raider debuted at number four with 35,250 units sold. Three weeks after its release, on March 26, Square Enix announced that the game had sold 3.4 million copies worldwide at retail, but had failed to reach predicted sales targets. Crystal Dynamics, however, defended Tomb Raider's sales, stating the reboot had had the "most successful launch" of any game that year, in addition to setting a new record for highest sales in the franchise's history. 
On 22 August, Darrell Gallagher, head of product development and studios for Square Enix, told Gamasutra that the game had sold more than 4 million copies worldwide. In the United Kingdom, Tomb Raider was the 6th best-selling boxed game of 2013. In January 2014, Scot Amos, executive producer of Tomb Raider, revealed that the game had achieved profitability by the end of 2013. On 3 February, Tomb Raider: Definitive Edition, a re-release for PlayStation 4 and Xbox One, debuted atop the UK charts. Gallagher predicted on March 6 that the game would surpass 6 million units by the end of the month. In April 2015, Gallagher announced that sales had reached 8.5 million, making the game the best-selling Tomb Raider title to date. The game has since sold more than 14.5 million copies. Awards The game was nominated for numerous best of E3 awards. Sequels Tomb Raider: The Beginning, a 48-page hardcover graphic novel written by the game's lead writer Rhianna Pratchett, published by Dark Horse Comics and telling the story of "how the ill-fated voyage of the Endurance came to be", was released in multiple editions in 2013 and was later included with the Game of the Year Edition and Definitive Edition. Comic book writer Gail Simone was hired in 2013 to continue the reboot's story in a line of comics published by Dark Horse Comics. The series, simply called Tomb Raider, is set between the game and its sequel, and its story leads directly into the sequel. Later, at the beginning of August, Square Enix's Western CEO Phil Rogers confirmed that a sequel to Tomb Raider was being developed for unspecified next-gen consoles. In an interview later that year, Brian Horton, the senior art director at Crystal Dynamics, said that the sequel would tell the next chapter of Lara's development. During Microsoft's E3 2014 presentation, Rise of the Tomb Raider was announced as a sequel, exclusive to Xbox consoles at launch. The exclusivity was timed, meaning that the title would see a release on other platforms after an unspecified period of time. Microsoft published the title for its release on Xbox consoles. Rise of the Tomb Raider was released on 10 November 2015 for Xbox One and Xbox 360, and on 28 January 2016 for Microsoft Windows. The PlayStation 4 version was released on 11 October 2016, titled the 20 Year Celebration, as it was released 20 years after the original Tomb Raider game. This version includes all of the previously released downloadable content. A third installment, Shadow of the Tomb Raider, was released in September 2018. Film adaptation The 2018 Tomb Raider reboot film adaptation, directed by Roar Uthaug, is in part based on the video game. Alicia Vikander, who portrays Lara Croft, was cast alongside actors Daniel Wu and Walton Goggins. The story follows Lara Croft's search for her father. The film was released on 16 March 2018. 
References Notes Footnotes External links 2013 video games Action-adventure games Asymmetrical multiplayer video games Crystal Dynamics games Feral Interactive games Linux games MacOS games Multiplayer and single-player video games Nixxes Software games PlayStation 3 games PlayStation 4 games Square Enix games Stadia games Stealth video games Survival video games Tomb Raider games Video games about cults Video game reboots Video games based on Japanese mythology Video games developed in Canada Video games developed in the Netherlands Video games developed in the United Kingdom Video games developed in the United States Video games featuring female protagonists Video games scored by Jason Graves Video games set in Japan Video games set on fictional islands Windows games Xbox 360 games Xbox Cloud Gaming games Xbox One games
Tomb Raider (2013 video game)
Physics
7,007
371,627
https://en.wikipedia.org/wiki/Worshipful%20Company%20of%20Salters
The Worshipful Company of Salters is one of the Great Twelve City Livery Companies, ranking 9th in order of precedence. An ancient merchant guild associated with the salt trade, the Salters' Company originated in London as the Guild of Corpus Christi. History and functions The Salters' Company was first granted a Royal Charter of Incorporation in 1394, with further charters authorising the Company to set standards regulating salt industry products from the City of London. The formal name under which it is incorporated is The Master, Wardens and Commonality of the Art or Mystery of the Salters of London. The Company was originally responsible for the regulation of salt merchants, but began losing control over the trade as the population of London increased and spread outwards from the City after the Industrial Revolution. Until the 19th century, the main use for salt was to preserve food for the winter months. Salt was probably the first commodity to be traded that, if not available locally, was imported. Through careful stewardship of financial bequests and funds, the Company now serves as a significant educational and charitable institution whilst maintaining links with its heritage by supporting education in chemistry, for example by awarding scholarships to chemistry and science students, among whom is Sam Carling. Since Sir Robert Bassett in 1475/76, eighteen Salters have served as Lord Mayor of London, the most recent being Sir Richard Nichols in 1997/98. The Master Salter for 2024/25 is Piers Vacher, supported by the Company Wardens, Commoner Andrew McMurtrie, Anthony Cecil, 4th Baron Rockley, and John Stebbing. Since 2019, the Clerk to the Salters' Company has been Tim Smith. Salters' Hall The former Salters' Hall in St Swithin's Lane, London EC4, bombed in 1941, was during the 1700s the meeting place of Presbyterians and in 1719 the site of the "Salters' Hall controversy", a notable turning point for religious tolerance in England. The present Salters' Hall on Fore Street, EC2 dates from 1976, designed by architect Sir Basil Spence, and was Grade II-listed in 2010. A major redevelopment by architects de Metz Forbes Knight, including a new entrance pavilion, was completed in 2016. Salters' Institute Established in 1918 as the Salters' Institute of Industrial Chemistry to support chemistry students after the First World War, particularly those whose studies had been interrupted by military service, the Salters' Company's educational charity awards prizes for students of chemistry, chemical engineering, biology and physics (plus science technicians), as well as running various activities to promote the study of science. Coat of arms The Company received a grant of arms in 1530 from Thomas Benolt, and then its crest and supporters in 1591 from Robert Cooke, each at the time Clarenceux King of Arms. The Salters' Co. arms are blazoned: Escutcheon: Per chevron Azure and Gules three Covered Salts Argent garnished Or. Crest: On a Wreath of the Colours a Cubit Arm erect issuing from Clouds all Proper holding a Covered Salt Argent garnished Or. Supporters: Two Otters Sable bezanty ducally gorged and chained Or. Its motto is Sal Sapit Omnia, Latin for Salt Savours All. See also Drysalter Zunft References External links The Salters' Company www.cityoflondon.gov.uk Salters 1559 establishments in England Companies of medieval England Great Twelve City Livery Companies Chemical industry in the United Kingdom Chemistry trade associations
Worshipful Company of Salters
Chemistry
707
23,536,766
https://en.wikipedia.org/wiki/Touton%20giant%20cell
Touton giant cells are a type of multinucleated giant cell observed in a myriad of pathological disorders and conditions. Specifically, Touton giant cells are found in lipid-rich lesions such as those of fat necrosis, xanthoma, xanthelasma and xanthogranulomas. Touton giant cells are also referred to as xanthelasmatic cells because they are found in lesions associated with xanthomas, which are skin growths with yellow, lipid-filled deposits. Touton giant cells are frequently observed in granulomatous inflammation, a type of inflammation caused by the clustering of immune cells, or granulomas. They are also found in dermatofibroma. Touton giant cells are commonly characterized by their distinctive histological appearance as well as their response to various stimuli associated with the body's immune system. History Touton giant cells are named for Karl Touton, a German botanist and dermatologist. Karl Touton first observed these cells in 1885 and named them "xanthelasmatic giant cells", a name which has since fallen out of favor. Touton observed these giant cells when examining a biopsy, or skin tissue sample, from someone with a lesion under a microscope. He then classified and named these cells on account of their strikingly distinctive appearance. Touton giant cells are still observed using these methods, as well as by staining with histological dyes such as hematoxylin and eosin (H&E). Appearance Touton giant cells, being multinucleated giant cells, can be distinguished by the presence of several nuclei in a distinct pattern. This pattern is described as ring-like or wreath-like, in the center of the cell. These cells contain a ring of nuclei surrounding a central homogeneous cytoplasm, while foamy cytoplasm surrounds the nuclei. The cytoplasm is usually lipid-rich and has a foamy appearance. The cytoplasm is divided into two distinct areas: the peripheral zone and the central zone. The central zone is the cytoplasm surrounded by the nuclei, which is described as both amphophilic and eosinophilic. Meanwhile, the cytoplasm near the periphery of the cell, the peripheral zone, is pale and contains vacuoles due to the lipid content in this zone of the cell. Activation and macrophage relationship Touton giant cells are formed by the fusion of macrophage-derived foam cells. It has been suggested that cytokines (signaling molecules) such as interferon gamma, interleukin-3, interleukin-6 (IL-6) and M-CSF may be involved in the production of Touton giant cells. Specifically, Touton giant cells are said to be derived from macrophages that aid directly in reducing inflammation. They show reparative behavior: activated by the cytokine IL-6, these cells are able to perform tissue repair. Although the specific fusion molecule associated with fusing macrophages to form Touton giant cells is not well understood, there appears to be an association with the activation of Toll-like receptors (TLRs). Further proof that Touton giant cells are histiocytic in origin, meaning they arise from a macrophage-lineage cell, is that they react positively to enzymes found in histiocytes such as lysozyme, alpha 1-anti-trypsin and alpha 1-anti-chymotrypsin. Touton giant cells are able to express these proteins, which are involved in processes such as regulation of tissue damage, tissue breakdown and inflammation, all common actions of a Touton giant cell. 
Correspondence with immune system Touton giant cells are considered white blood cells due to their role in the immune system as well as their derivation. These multinucleated giant cells are formed by the fusion of macrophages, a type of white blood cell with many functions, such as removing dead cells and stimulating the action of other immune cells. Macrophages are derived from monocytes, white blood cells that aid in destroying bacteria and germs to prevent infection; these monocytes arise from the myeloid stem cell line. Touton giant cells act to remove harmful substances from the tissue in which they arise. They do so by engulfing and degrading large foreign materials, such as the lipids in the lesions in which they are found, most commonly areas of fat necrosis. Associated conditions Conditions associated with Touton giant cells are ones that involve lipid metabolism or chronic inflammation. Some of these conditions include xanthomas: lesions that are seen in hyperlipidemia; xanthogranuloma: benign skin lesions; fat necrosis: areas of trauma where adipose tissue has been disrupted; dermatofibroma: a benign skin tumor characterized by fibrous components; and granulomatous diseases such as sarcoidosis. References External links Cytokines & Cells Online Pathfinder Encyclopaedia Cell biology
Touton giant cell
Biology
1,083
31,586,191
https://en.wikipedia.org/wiki/Dana%2030
The Dana/Spicer Model 30 is an automotive axle manufactured by Dana Holding Corporation. It has been manufactured as a beam axle and as an independent suspension axle, in several versions. General specifications Ring Gear measures ; OEM inner axle shaft spline count: 27; GAWR up to 2770 lbs. Dana 30 solid axles Dana 23 The Dana Spicer 23 is the axle on which the Dana 30 is loosely based, with improvements made over time. This axle was made only for the rear of vehicles. Full-floating and semi-floating variations were produced. Dana 25 The Dana Spicer 25 was based on the Dana 23 and was made only as a front axle for four-wheel-drive vehicles. This was the company's first front drive axle. Dana 27 The Dana Spicer 27 phased out the Dana 23 and Dana 25 units in the 1960s. Independent front suspension Dana 30 axle Jeep Liberty 4x4 models use the Dana 30 in the form of independent front suspension (IFS). The AMC Eagle front axle is also a Dana 30 IFS. References Automotive engineering Automobile axles
Dana 30
Engineering
212
39,518,173
https://en.wikipedia.org/wiki/Dihydrodipicolinate%20synthase
4-Hydroxy-tetrahydrodipicolinate synthase (EC 4.3.3.7, dihydrodipicolinate synthase, dihydropicolinate synthetase, dihydrodipicolinic acid synthase, L-aspartate-4-semialdehyde hydro-lyase (adding pyruvate and cyclizing), dapA (gene)) is an enzyme with the systematic name L-aspartate-4-semialdehyde hydro-lyase (adding pyruvate and cyclizing; (4S)-4-hydroxy-2,3,4,5-tetrahydro-(2S)-dipicolinate-forming). This enzyme catalyses the following chemical reaction: pyruvate + L-aspartate-4-semialdehyde ⇌ (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate + H2O The reaction proceeds in three consecutive steps (outlined in the sketch at the end of this article). Function This enzyme belongs to the family of lyases, specifically the amine-lyases, which cleave carbon-nitrogen bonds. 4-Hydroxy-tetrahydrodipicolinate synthase is the key enzyme in lysine biosynthesis via the diaminopimelate pathway of prokaryotes, some phycomycetes, and higher plants. The enzyme catalyses the condensation of L-aspartate-beta-semialdehyde and pyruvate to 4-hydroxy-tetrahydrodipicolinic acid via a ping-pong mechanism in which pyruvate binds to the enzyme by forming a Schiff base with a lysine residue. Related enzymes Three other proteins are structurally related to this enzyme and probably also act via a similar catalytic mechanism. These are Escherichia coli N-acetylneuraminate lyase (protein NanA), which catalyses the condensation of N-acetyl-D-mannosamine and pyruvate to form N-acetylneuraminate; Rhizobium meliloti (Sinorhizobium meliloti) protein MosA, which is involved in the biosynthesis of the rhizopine 3-O-methyl-scyllo-inosamine; and E. coli hypothetical protein YjhH. Structure The sequences of 4-hydroxy-tetrahydrodipicolinate synthase from different sources are well-conserved. The structure takes the form of a homotetramer, in which 2 monomers are related by an approximate 2-fold symmetry. Each monomer comprises 2 domains: an 8-fold α/β-barrel, and a C-terminal α-helical domain. The fold resembles that of N-acetylneuraminate lyase. The active-site lysine is located in the barrel domain, and has access via 2 channels on the C-terminal side of the barrel. References Further reading External links EC 4.3.3 Enzymes of known structure Protein domains
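The three consecutive steps can be outlined in the same shorthand as the overall reaction above. This outline simply unpacks the ping-pong, Schiff-base chemistry described under Function; the intermediates shown are a sketch inferred from that description, not an independent mechanistic claim: (1) Schiff-base formation: enzyme-Lys-NH2 + pyruvate → enzyme-Lys-N=C(CH3)-COOH + H2O; (2) aldol-type addition: the enamine form of the Schiff base attacks the aldehyde carbon of L-aspartate-4-semialdehyde, forming the new carbon-carbon bond; (3) cyclization and release: transimination closes the six-membered ring, releasing (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate and regenerating the free lysine.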
Dihydrodipicolinate synthase
Biology
673
41,660,225
https://en.wikipedia.org/wiki/Bismuth%28III%29%20nitrate
Bismuth(III) nitrate is a salt composed of bismuth in its cationic +3 oxidation state and nitrate anions. The most common solid form is the pentahydrate. It is used in the synthesis of other bismuth compounds. It is available commercially. It is the only nitrate salt formed by a group 15 element, indicative of bismuth's metallic nature. Preparation and reactions Bismuth nitrate can be prepared by the reaction of bismuth metal and concentrated nitric acid. Bi + 4HNO3 → Bi(NO3)3 + 2H2O + NO It dissolves in nitric acid but is readily hydrolysed to form a range of oxynitrates when the pH increases above 0. It is also soluble in acetone, acetic acid and glycerol but practically insoluble in ethanol and ethyl acetate. Some uses in organic synthesis have been reported, for example the nitration of aromatic compounds and the selective oxidation of sulfides to sulfoxides. Bismuth nitrate forms insoluble complexes with pyrogallol and cupferron, and these have been the basis of gravimetric methods of determining bismuth content. On heating, bismuth nitrate decomposes, forming nitrogen dioxide, NO2 (a balanced equation is sketched at the end of this article). Structure The crystal form is triclinic, and contains 10-coordinate Bi3+ (three bidentate nitrate ions and four water molecules). References Bismuth nitrate Nitrates
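A balanced equation for the thermal decomposition mentioned above can be sketched on the pattern usual for heavy-metal nitrates (oxide plus nitrogen dioxide plus oxygen). The text above confirms only the NO2, so the oxide and oxygen products are the standard assumption for this class of salt rather than a claim from the source: 4Bi(NO3)3 → 2Bi2O3 + 12NO2 + 3O2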
Bismuth(III) nitrate
Chemistry
311
14,455,317
https://en.wikipedia.org/wiki/Cutinase
The enzyme cutinase (systematic name: cutin hydrolase, EC 3.1.1.74) is a member of the hydrolase family. It catalyzes the following reaction: R1COOR2 + H2O → R1COOH + R2OH In biological systems, the reactant carboxylic ester is a constituent of the cutin polymer, and the hydrolysis of cutin results in the formation of alcohol and carboxylic acid monomer products. Nomenclature Cutinase has an assigned enzyme commission number of EC 3.1.1.74. Cutinase is in the third class of enzymes, meaning that its primary function is to hydrolyze its substrate (in this case, cutin). Within the third class, cutinase is further categorized into the first subclass, which indicates that it specifically hydrolyzes ester bonds. It is then placed in the first sub-subclass, meaning that it targets carboxylic esters, the bonds that join cutin monomers together. Function Most plants have a layer composed of cutin, called the cuticle, on their aboveground surfaces such as stems, leaves, and fruits. This layer of cutin is formed by a matrix-like structure that contains waxy components embedded in the carbohydrate layers. Cutin, which makes up most of the cuticle matrix (40-80%), is composed primarily of fatty acid chains that are polymerized via carboxylic ester bonds. Research suggests that cutin plays a critical role in preventing pathogenic infections in plant systems. For instance, experiments conducted on tomato plants that were substantially unable to synthesize cutin found that the tomatoes produced by those plants were significantly more susceptible to infection by both opportunistic pathogens and intentionally inoculated fungal spores. Cutinase is produced by a variety of fungal plant pathogens, and its activity was first detected in the fungus Penicillium spinulosum. In studies of Nectria haematococca, a fungal pathogen that is the cause of foot rot in pea plants, cutinase has been shown to play key roles in facilitating the early stages of plant infection. It is also suggested that when fungal spores make initial contact with plant surfaces, a small amount of catalytic cutinase produces cutin monomers, which in turn up-regulate the expression of the cutinase gene. This suggests that the expression pathway of cutinase in fungal spores is characterized by a positive feedback loop until the fungus successfully breaches the cutin layer; however, the specific mechanism of this pathway is unclear. Inhibition of cutinase has been shown to prevent fungal infection through intact cuticles. Conversely, the supplementation of cutinase to fungi that are not able to produce it naturally has been shown to enhance fungal infection success rates. Cutinases have also been observed in a few plant-pathogenic bacterial species, such as Streptomyces scabies, Thermobifida fusca, Pseudomonas mendocina, and Pseudomonas putida, but these have not been studied to the same extent as those found in fungi. The molecular structure of the Thermobifida fusca cutinase shows similarities to the Fusarium solani pisi fungal cutinase, with congruencies in their active sites and overall mechanisms. Structure Cutinase belongs to the α-β class of proteins, with a central β-sheet of 5 parallel strands covered by 5 alpha helices on either side of the sheet. Fungal cutinase is generally composed of around 197 amino acid residues, and its native form consists of a single domain. The protein also contains 4 invariant cysteine residues that form 2 disulfide bridges, whose cleavage results in a complete loss of enzymatic activity. 
Crystal structures have shown that the active site of cutinases is found on one end of the ellipsoid shape of the enzyme. This active site is flanked by two hydrophobic loop structures and partly covered by 2 thin bridges formed by amino acid side chains. It does not possess a hydrophobic lid, which is a common constituent feature among other lipases. Instead, the catalytic serine in the active site is exposed to open solvent, and the cutinase enzyme does not show interfacial activation behaviors at an aqueous-nonpolar interface. Cutinase activation is believed to derive from slight shifts in the conformation of hydrophobic residues, acting as a miniature lid. The oxyanion hole in the active site is a preformed, constitutive feature of the binding site, which differs from most lipolytic enzymes, whose oxyanion holes are induced upon substrate binding. Mechanism Cutinase is a serine esterase, and the active site contains a serine-histidine-aspartate triad and an oxyanion hole, which are signature elements of serine hydrolases. The binding site of the cutin lipid polymer consists of two hydrophobic loops characterized by nonpolar amino acids such as leucine, alanine, isoleucine, and proline. These hydrophobic residues show a higher degree of flexibility, suggesting an induced-fit model to facilitate cutin binding to the active site. In the cutinase active site, histidine deprotonates serine, allowing the serine to undergo a nucleophilic attack on the cutin carboxylic ester. This is followed by an elimination reaction whereby the charged oxygen (stabilized by the oxyanion hole) re-forms a double bond, removing an R group from the cutin polymer in the form of an alcohol. The process repeats with a nucleophilic attack on the new carboxylic ester by a deprotonated water molecule. Following this, the charged oxygen re-forms its double bond, removing the serine attachment and releasing the carboxylic acid R monomer (a minimal scheme of these two stages is sketched at the end of this article). Applications The stability of cutinases at higher temperatures (20-50 °C) and their compatibility with other hydrolytic enzymes have potential applications in the detergent industry. In fact, it has been shown that cutinases are more efficient at cleaving and eliminating non-calcium fats from clothing than other industrial lipases. Another advantage of cutinase in this industry is its ability to be catalytically active with both water- and lipid-soluble ester compounds, making it a more versatile degradative agent. This versatility has also made cutinase the subject of experiments aimed at enhancing the biofuel industry, because of its ability to facilitate transesterification of biofuels in various solubility environments. Rather unexpectedly, the ability to degrade the cutin layer of plants and their fruits holds the potential to be beneficial to the fruit industry. This is because the cuticle layer of fruits is a putative mechanism of water regulation, and the degradation of this layer subjects the fruit to water movement across its surface. By using cutinase to degrade the cuticle of fruits, industry makers can enhance the drying of fruits and more easily deliver preservatives and additives to the flesh of the fruit. See also PETase References Further reading EC 3.1.1 Enzymes of unknown structure Protein domains
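A minimal scheme of the two-stage mechanism described above, in the same shorthand as the reaction at the top of this article. Enz-Ser-OH denotes the catalytic serine and R1COOR2 a cutin ester; this is the generic serine-hydrolase (acyl-enzyme) formulation implied by the triad, offered as a sketch rather than a cutinase-specific result: Acylation: Enz-Ser-OH + R1COOR2 → Enz-Ser-OCOR1 + R2OH (the alcohol leaves first). Deacylation: Enz-Ser-OCOR1 + H2O → Enz-Ser-OH + R1COOH (water, deprotonated by the histidine, frees the acid and regenerates the enzyme). In both stages the oxyanion hole stabilizes the negatively charged tetrahedral intermediate.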
Cutinase
Biology
1,497
45,449,629
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Fluid%20Mechanics
Annual Review of Fluid Mechanics is a peer-reviewed scientific journal covering research on fluid mechanics. It is published once a year by Annual Reviews, and the editors are Parviz Moin and Howard Stone. As of 2023, Annual Review of Fluid Mechanics has been published as open access, under the Subscribe to Open model. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 25.4, ranking it first out of 40 journals in "Physics, Fluids and Plasmas" and first out of 170 journals in the category "Mechanics". History The Annual Review of Fluid Mechanics was first published in 1969 by the nonprofit publisher Annual Reviews. Its inaugural editor was William R. Sears. Taking after the Annual Review of Biochemistry, each volume typically begins with a prefatory chapter in which a notable scientist in the field reflects on their career and accomplishments. As of 2020, it was published both in print and electronically. Some of its articles are available online in advance of the volume's publication date. It defines its scope as covering significant developments in the field of fluid mechanics, including its history and foundations, non-Newtonian fluids, rheology, incompressible and compressible flow, plasma flow, flow stability, multiphase flow, mixing and transport of heat, control of fluid flow, combustion, turbulence, shock waves, and explosions. It is abstracted and indexed in Scopus, Science Citation Index Expanded, PASCAL, Inspec, GEOBASE, and Academic Search, among others. Editorial processes The Annual Review of Fluid Mechanics is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. William R. Sears (1969) Milton Van Dyke, Walter G. Vincenti, and John V. Wehausen (1970–1976) Van Dyke, Wehausen, and John L. Lumley (1977–1986) Van Dyke, Lumley, and Helen L. Reed (1987–2000) Lumley, Reed, and Stephen H. Davis (2001) Lumley, Davis, and Parviz Moin (2002) Davis and Moin (2003–2021) Moin and Howard A. Stone (2021–2025) Stone and Jonathan B. Freund (2025–) Current editorial committee As of 2022, the editorial committee consists of the co-editors and the following members: Jonathan B. Freund Dennice F. Gayme Anne Juel Daniel Livescu Beverley J. McKeon Geoff Vallis Roberto Zenit See also List of fluid mechanics journals References Academic journals established in 1969 English-language journals Fluid dynamics journals Annual journals Fluid Mechanics
Annual Review of Fluid Mechanics
Chemistry
729
4,599,105
https://en.wikipedia.org/wiki/Four%20causes
The four causes or four explanations are, in Aristotelian thought, categories of questions that explain "the why's" of something that exists or changes in nature. The four causes are the material cause, the formal cause, the efficient cause, and the final cause. Aristotle wrote that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause." While there are cases in which classifying a "cause" is difficult, or in which "causes" might merge, Aristotle held that his four "causes" provided an analytical scheme of general applicability. Aristotle's word aitia has, in philosophical scholarly tradition, been translated as 'cause'. This peculiar, specialized, technical usage of the word 'cause' is not that of everyday English language. Rather, the translation of Aristotle's aitia that is nearest to current ordinary language is "explanation". In Physics II.3 and Metaphysics V.2, Aristotle holds that there are four kinds of answers to "why" questions: Matter The material cause of a change or movement. This is the aspect of the change or movement that is determined by the material that composes the moving or changing things. For a table, this might be wood; for a statue, it might be bronze or marble. Form The formal cause of a change or movement. This is a change or movement caused by the arrangement, shape, or appearance of the thing changing or moving. Aristotle says, for example, that the ratio 2:1, and number in general, is the formal cause of the octave. Efficient, or agent The efficient or moving cause of a change or movement. This consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a child is a parent. Final, end, or purpose The final cause of a change or movement. This is a change or movement for the sake of a thing to be what it is. For a seed, it might be an adult plant; for a sailboat, it might be sailing; for a ball at the top of a ramp, it might be coming to rest at the bottom. The four "causes" are not mutually exclusive. For Aristotle, several answers, preferably four, have to be given to the question "why" to explain a phenomenon and especially the actual configuration of an object. For example, if asking why a table is such and such, an explanation in terms of the four causes would sound like this: This table is solid and brown because it is made of wood (matter); it does not collapse because it has four legs of equal length (form); it is as it is because a carpenter made it, starting from a tree (agent); it has these dimensions because it is to be used by humans (end). Aristotle distinguished between intrinsic and extrinsic causes. Matter and form are intrinsic causes because they deal directly with the object, whereas the efficient and final causes are said to be extrinsic because they are external. Thomas Aquinas demonstrated that only those four types of causes can exist and no others. He also introduced a priority order according to which "matter is made perfect by the form, form is made perfect by the agent, and agent is made perfect by the finality." Hence, the finality is the cause of causes or, equivalently, the queen of causes. Definition of "cause" In his philosophical writings, Aristotle used the Greek word αἴτιον (aition), a neuter singular form of an adjective. 
The Greek word had meant, perhaps originally in a "legal" context, what or who is "responsible", mostly but not always in a bad sense of "guilt" or "blame". Alternatively, it could mean "to the credit of" someone or something. The appropriation of this word by Aristotle and other philosophers reflects how the Greek experience of legal practice influenced the concern in Greek thought to determine what is responsible. The word developed other meanings, including its use in philosophy in a more abstract sense. About a century before Aristotle, the anonymous author of the Hippocratic text On Ancient Medicine had described the essential characteristics of a cause as it is considered in medicine: We must, therefore, consider the causes of each [medical] condition to be those things which are such that, when they are present, the condition necessarily occurs, but when they change to another combination, it ceases. Aristotle's "four causes" Aristotle used the four causes to provide different answers to the question, "because of what?" The four answers to this question illuminate different aspects of how a thing comes into being or of how an event takes place. Material Aristotle considers the material "cause" of an object as equivalent to the nature of the raw material out of which the object is composed. (The word "nature" for Aristotle applies to both its potential in the raw material and its ultimate finished form. In a sense this form already existed in the material: see potentiality and actuality.) Whereas modern physics looks to simple bodies, Aristotle's physics took a more general viewpoint, and treated living things as exemplary. Nevertheless, he argued that simple natural bodies such as earth, fire, air, and water also showed signs of having their own innate sources of motion, change, and rest. Fire, for example, carries things upwards, unless stopped from doing so. Things formed by human artifice, such as beds and cloaks, have no innate tendency to become beds or cloaks. In traditional Aristotelian philosophical terminology, material is not the same as substance. Matter has parallels with substance in so far as primary matter serves as the substratum for simple bodies which are not substance: sand and rock (mostly earth), rivers and seas (mostly water), atmosphere and wind (mostly air and then mostly fire below the moon). In this traditional terminology, 'substance' is a term of ontology, referring to really existing things; only individuals are said to be substances (subjects) in the primary sense. Secondary substance, in a different sense, also applies to man-made artifacts. Formal Aristotle considers the formal "cause" as describing the pattern or form which, when present, makes matter into a particular type of thing, which we recognize as being of that particular type. By Aristotle's own account, this is a difficult and controversial concept. It links with theories of forms such as those of Aristotle's teacher, Plato, but in Aristotle's own account (see his Metaphysics), he takes into account many previous writers who had expressed opinions about forms and ideas, but he shows how his own views differ from them. Efficient Aristotle defines the agent or efficient "cause" of an object as that which causes change and drives transient motion (such as a painter painting a house) (see Aristotle, Physics II 3, 194b29). In many cases, this is simply the thing that brings something about. 
For example, in the case of a statue, it is the person chiseling away which transforms a block of marble into a statue. According to Lloyd, of the four causes, only this one is what is meant by the modern English word "cause" in ordinary speech. Final Aristotle defines the end, purpose, or final "cause" as that for the sake of which a thing is done. Like the form, this is a controversial type of explanation in science; some have argued for its survival in evolutionary biology, while Ernst Mayr denied that it continued to play a role. It is commonly recognised that Aristotle's conception of nature is teleological in the sense that Nature exhibits functionality in a more general sense than is exemplified in the purposes that humans have. Aristotle observed that a telos does not necessarily involve deliberation, intention, consciousness, or intelligence: according to Aristotle, a seed has the eventual adult plant as its end (i.e., as its telos) if and only if the seed would become the adult plant under normal circumstances. In Physics II.9, Aristotle hazards a few arguments that a determination of the end (i.e., final cause) of a phenomenon is more important than the others. He argues that the end is that which brings it about, so for example "if one defines the operation of sawing as being a certain kind of dividing, then this cannot come about unless the saw has teeth of a certain kind; and these cannot be unless it is of iron." According to Aristotle, once a final "cause" is in place, the material, efficient and formal "causes" follow by necessity. However, he recommends that the student of nature determine the other "causes" as well, and notes that not all phenomena have an end, e.g., chance events. Aristotle saw that his biological investigations provided insights into the causes of things, especially into the final cause. George Holmes Howison highlights "final causation" in presenting his theory of metaphysics, which he terms "personal idealism", and to which he invites not only man, but all (ideal) life. However, Edward Feser argues, in line with the Aristotelian and Thomistic tradition, that finality has been greatly misunderstood. Indeed, without finality, efficient causality becomes inexplicable. Finality thus understood is not purpose but that end towards which a thing is ordered. When a match is rubbed against the side of a matchbox, the effect is not the appearance of an elephant or the sounding of a drum, but fire. The effect is not arbitrary, because the match is ordered towards the end of fire, which is realized through efficient causes. In their biosemiotic study, Stuart Kauffman, Robert K. Logan et al. (2007) also remark on final causation. Scholasticism In Scholasticism, efficient causality was governed by two principles: omne agens agit simile sibi (every agent produces something similar to itself): stated frequently in the writings of St. Thomas Aquinas, the principle establishes a relationship of similarity and analogy between cause and effect; nemo dat quod non habet (no one gives what he does not possess): partially similar to the legal principle of the same name, in Metaphysics it establishes that the cause cannot bestow on the effect the quantity of being (and thus of unity, truth, goodness, reality and perfection) that it does not already possess within itself. Otherwise, there would be creation out of nothingness of self and other-from-self. In other words, the cause must possess a degree of reality greater than or equal to that of the effect. 
If it is greater, we speak of equivocal causation, in analogy to the three types of logical predication (univocal, equivocal, analogical); if it is equal, we speak of univocal predication. Thomas in this regard distinguished between causa fiendi (cause of occurring, of only beginning to be) and causa essendi (cause of being and also of beginning to be). When the being of the agent cause is in the effect in a lesser or equal degree, this is a causa fiendi. Furthermore, the second principle also establishes a qualitative link: the cause can only transmit its own essence to the effect. For example, a dog cannot transmit the essence of a feline to its young, but only that of a dog. The principle is equivalent to that of Causa aequat effectum (cause equals effect) in both a quantitative and qualitative sense. Modern science In his Advancement of Learning (1605), Francis Bacon wrote that natural science "doth make inquiry, and take consideration of the same natures: but how? Only as to the material and efficient causes of them, and not as to the forms." Using the terminology of Aristotle, Bacon demands that, apart from the "laws of nature" themselves, the causes relevant to natural science are only efficient causes and material causes, or, to use the formulation which became famous later, natural phenomena require scientific explanation in terms of matter and motion. In The New Organon, Bacon divides knowledge into physics and metaphysics: From the two kinds of axioms which have been spoken of arises a just division of philosophy and the sciences, taking the received terms (which come nearest to express the thing) in a sense agreeable to my own views. Thus, let the investigation of forms, which are (in the eye of reason at least, and in their essential law) eternal and immutable, constitute Metaphysics; and let the investigation of the efficient cause, and of matter, and of the latent process, and the latent configuration (all of which have reference to the common and ordinary course of nature, not to her eternal and fundamental laws) constitute Physics. And to these let there be subordinate two practical divisions: to Physics, Mechanics; to Metaphysics, what (in a purer sense of the word) I call Magic, on account of the broadness of the ways it moves in, and its greater command over nature. Biology Explanations in terms of final causes remain common in evolutionary biology. Francisco J. Ayala has claimed that teleology is indispensable to biology since the concept of adaptation is inherently teleological. In an appreciation of Charles Darwin published in Nature in 1874, Asa Gray noted "Darwin's great service to Natural Science" lies in bringing back teleology "so that, instead of Morphology versus Teleology, we shall have Morphology wedded to Teleology." Darwin quickly responded, "What you say about Teleology pleases me especially and I do not think anyone else has ever noticed the point." Francis Darwin and T. H. Huxley reiterate this sentiment. The latter wrote that "the most remarkable service to the philosophy of Biology rendered by Mr. Darwin is the reconciliation of Teleology and Morphology, and the explanation of the facts of both, which his view offers." James G. Lennox states that Darwin uses the term 'Final Cause' consistently in his Species Notebook, On the Origin of Species, and after. Contrary to Ayala's position, Ernst Mayr states that "adaptedness... is a posteriori result rather than an a priori goal-seeking." 
Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand. For example, S. H. P. Madrell writes that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection." However, Lennox states that in evolution as conceived by Darwin, it is true both that evolution is the result of mutations arising by chance and that evolution is teleological in nature. Statements that a species does something "in order to" achieve survival are teleological. The validity or invalidity of such statements depends on the species and the intention of the writer as to the meaning of the phrase "in order to." Sometimes it is possible or useful to rewrite such sentences so as to avoid teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology even if that is not the intention. Animal behaviour (Tinbergen's four questions) Tinbergen's four questions, named after the ethologist Nikolaas Tinbergen and based on Aristotle's four causes, are complementary categories of explanations for animal behaviour. They are also commonly referred to as levels of analysis. The four questions concern: function, what an adaptation does that is selected for in evolution; phylogeny, the evolutionary history of an organism, revealing its relationships to other species; mechanism, namely the proximate cause of a behaviour, such as the role of testosterone in aggression; and ontogeny, the development of an organism from egg to embryo to adult. Technology (Heidegger's four causes) In The Question Concerning Technology, echoing Aristotle, Martin Heidegger describes the four causes as follows: causa materialis: the material or matter; causa formalis: the form or shape the material or matter enters; causa finalis: the end; causa efficiens: the effect that brings about the finished result. Heidegger explains that "[w]hoever builds a house or a ship or forges a sacrificial chalice reveals what is to be brought forth, according to the terms of the four modes of occasioning." The educationist David Waddington comments that although the efficient cause, which he identifies as "the craftsman," might be thought the most significant of the four, in his view each of Heidegger's four causes is "equally co-responsible" for producing a craft item, in Heidegger's terms "bringing forth" the thing into existence. Waddington cites Lovitt's description of this bringing forth as "a unified process." See also First cause Anthropic principle Biosemiotics Tinbergen's four questions Convergent evolution Five whys Four discourses, by Jacques Lacan Proximate and ultimate causation Socrates Teleology The purpose of a system is what it does Notes References Cohen, Marc S. "The Four Causes" (Lecture Notes). Accessed March 14, 2006. Falcon, Andrea. Aristotle on Causality (link to section labeled "Four Causes"). Stanford Encyclopedia of Philosophy, 2008. Hennig, Boris. "The Four Causes." Journal of Philosophy 106(3), 2009, 137–160. Moravcsik, J.M. "Aitia as generative factor in Aristotle's philosophy." Dialogue, 14: 622–638, 1975. 
English translation of Study on Phideas, by Pía Figueroa, written with the theme of the Final Cause as per Aristotle. External links The Consequences of Ideas: Understanding the Concepts that Shaped Our World, by R. C. Sproul Aristotle on definition, by Marguerite Deslauriers, p. 81 Philosophy in the ancient world: an introduction, by James A. Arieti, p. 201 Doctrine of Being in the Aristotelian Metaphysics, by Joseph Owens and Etienne Gilson Aitia as generative factor in Aristotle's philosophy A Compass for the Imagination, by Harold C. Morris. Philosophy thesis elaborating on Aristotle's Theory of the Four Causes. Washington State University, 1981. Philosophy of Aristotle Causality Concepts in metaphysics
Four causes
Physics
3,918
48,583,431
https://en.wikipedia.org/wiki/Conradson%20carbon%20residue
Conradson carbon residue, commonly known as "Concarbon" or "CCR", is a laboratory test used to provide an indication of the coke-forming tendencies of an oil. Quantitatively, the test measures the amount of carbonaceous residue remaining after the oil's evaporation and pyrolysis. In general, the test is applicable to petroleum products which are relatively non-volatile and which decompose on distillation at atmospheric pressure. The phrase "Conradson carbon residue" and its common names can refer to either the test or the numerical value obtained from it. Test method A quantity of sample is weighed, placed in a crucible, and subjected to destructive distillation. During a fixed period of severe heating, the residue undergoes cracking and coking reactions. At the termination of the heating period, the crucible containing the carbonaceous residue is cooled in a desiccator and weighed. The residue remaining is calculated as a percentage of the original sample and reported as Conradson carbon residue (a worked example is sketched at the end of this article). Applications For burner fuel, Concarbon provides an approximation of the tendency of the fuel to form deposits in vaporizing pot-type and sleeve-type burners. For diesel fuel, Concarbon correlates approximately with combustion chamber deposits, provided that alkyl nitrates are absent or, if present, that the test is performed on the base fuel without additive. For motor oil, Concarbon was once regarded as indicative of the amount of carbonaceous deposits the oil would form in the combustion chamber of an engine. This is now considered to be of doubtful significance due to the presence of additives in many oils. For gas oil, Concarbon provides a useful correlation in the manufacture of gas therefrom. For delayed cokers, the Concarbon of the feed correlates positively with the amount of coke that will be produced. For fluid catalytic cracking units, the Concarbon of the feed can be used to estimate the feed's coke-forming tendency. See also Ramsbottom Carbon Residue References Petroleum technology Geochemical processes Petroleum industry
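Because the test method above reduces to two weighings, the reported value follows from a single ratio. The formula restates the calculation implied by the test method, and the numbers in the example are illustrative rather than taken from the standard: Conradson carbon residue (mass %) = (mass of carbonaceous residue ÷ mass of original sample) × 100. For example, a 10.0 g sample that leaves 0.25 g of residue after heating and cooling would be reported as (0.25 ÷ 10.0) × 100 = 2.5% Conradson carbon residue.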
Conradson carbon residue
Chemistry,Engineering
418
17,875,920
https://en.wikipedia.org/wiki/APOBEC
APOBEC ("apolipoprotein B mRNA editing enzyme, catalytic polypeptide") is a family of evolutionarily conserved cytidine deaminases. Function A mechanism of generating protein diversity is mRNA editing. The APOBEC family of proteins perform mRNA modifications by deaminating cytidine bases to uracil. The N-terminal domain of APOBEC-like proteins is the catalytic domain, while the C-terminal domain is a pseudocatalytic domain. More specifically, the catalytic domain is a zinc dependent cytidine deaminase domain and is essential for cytidine deamination. The positively charged zinc ion in the catalytic domain attracts to the partial-negative charge of RNA. In the case of APOBEC-1, the mRNA transcript of intestinal apolipoprotein B is altered. RNA editing by APOBEC-1 requires homodimerization and this complex interacts with RNA-binding proteins to form the editosome. The resulting structure interacts with the codon CAA at codon 2153 and deaminates it into UAA, producing a stop codon that results in mRNA that is translated into the intestinal apoB-48 isoform. For other APOBEC-modified transcripts such as in the site-specific deamination of a CGA to a UGA stop codon in neurofibromatosis type 1 (NF1) mRNA, the resulting proteins are predicted to be truncated as well, although these transcripts are possibly degraded. C-to-U modifications do not always result in the truncation of proteins. For example, in humans/mammals they help protect from viral infections. APOBEC family proteins are widely expressed in cells of the human innate immune system. Cancer These enzymes, when misregulated, are a major source of mutation in numerous cancer types. When the expression of APOBEC family proteins is triggered, accidental mutations in somatic cells can lead to the development of oncogenes, cells which have the potential to develop into a tumor. APOBEC proteins are further expressed in attempt to regulate tumor formation. This makes APOBEC proteins a helpful marker for diagnosing malignant tumors. Structure A 2013 review discussed the structural and biophysical aspects of APOBEC3 family enzymes. Many of the APOBEC protein features are described in the widely studied APOBEC3G's page. Family members Human genes encoding members of the APOBEC protein family include: APOBEC1 APOBEC2 APOBEC3A APOBEC3B APOBEC3C APOBEC3D ("APOBEC3E" now refers to this) APOBEC3F APOBEC3G APOBEC3H APOBEC4 Activation-induced (cytidine) deaminase (AID) References EC 3.5.4
APOBEC
Chemistry
596
1,884,720
https://en.wikipedia.org/wiki/Benzyl%20chloroformate
Benzyl chloroformate, also known as benzyl chlorocarbonate or Z-chloride, is the benzyl ester of chloroformic acid. It can be also described as the chloride of the benzyloxycarbonyl (Cbz or Z) group. In its pure form it is a water-sensitive oily colorless liquid, although impure samples usually appear yellow. It possesses a characteristic pungent odor and degrades in contact with water. The compound was first prepared by Leonidas Zervas in the early 1930s who used it for the introduction of the benzyloxycarbonyl protecting group, which became the basis of the Bergmann-Zervas carboxybenzyl method of peptide synthesis he developed with Max Bergmann. This was the first successful method of controlled peptide chemical synthesis and for twenty years it was the dominant procedure used worldwide until the 1950s. To this day, benzyl chloroformate is often used for amine group protection. Preparation The compound is prepared in the lab by treating benzyl alcohol with phosgene: PhCH2OH + COCl2 → PhCH2OC(O)Cl + HCl Phosgene is used in excess to minimise the production of the carbonate (PhCH2O)2C=O. The use of phosgene gas in the lab preparation carries a very large health hazard, and has been implicated in the chronic pulmonary disease of pioneers in the usage of the compound such as Zervas. Amine protection Benzyl chloroformate is commonly used in organic synthesis for the introduction of the benzyloxycarbonyl (formerly called carboxybenzyl) protecting group for amines. The protecting group is abbreviated Cbz or Z (in honor of discoverer Zervas), hence the alternative shorthand designation for benzyl chloroformate as Cbz-Cl or Z-Cl. Benzyloxycarbonyl is a key protecting group for amines, suppressing the nucleophilic and basic properties of the N lone pair. This "reactivity masking" property, along with the ability to prevent racemization of Z-protected amines, made the Z group the basis of the Begmann-Zervas synthesis of oligopeptides (1932) where the following general reaction is performed to protect the N-terminus of a serially growing oligopeptide chain: This reaction was hailed as a "revolution" and essentially started the distinct field of synthetic peptide chemistry. It remained unsurpassed in utility for peptide synthesis until the early 1950s when mixed anhydride and active ester methodologies were developed. Although the reaction is no longer commonly used for peptides, it is nonetheless very widespread for amine protection in other applications within organic synthesis and total synthesis. Common procedures to achieve protection starting from benzyl chloroformate include: Benzyl chloroformate and a base, such as sodium carbonate in water at 0 °C Benzyl chloroformate and magnesium oxide in ethyl acetate at 70 °C to reflux Benzyl chloroformate, DIPEA, acetonitrile and scandium trifluoromethanesulfonate (Sc(OTf)3) Alternatively, the Cbz group can be generated by the reaction of an isocyanate with benzyl alcohol (as in the Curtius rearrangement). Deprotection Hydrogenolysis in the presence of a variety of palladium-based catalysts is the usual method for deprotection. Palladium on charcoal is typical. Alternatively, HBr and strong Lewis acids have been used, provided that a trap is provided for the released benzyl carbocation. When the protected amine is treated by either of the above methods (i.e. by catalytic hydrogenation or acidic workup), it yields a terminal carbamic acid which then readily decarboxylates to give the free amine. 
2-Mercaptoethanol can also be used, in the presence of potassium phosphate in dimethylacetamide. References External links Chloroformates Benzyl esters Reagents for organic chemistry Foul-smelling chemicals
Benzyl chloroformate
Chemistry
883
71,138
https://en.wikipedia.org/wiki/Wave%20function%20collapse
In quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation. While standard quantum mechanics postulates wave function collapse to connect quantum to classical models, some extension theories propose physical processes that cause collapse. The in-depth study of quantum decoherence has proposed that collapse is related to the interaction of a quantum system with its environment. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement. Mathematical description In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position and the momentum, but also the energy, components of spin, and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum state (i.e. eigenstate) and the eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing $|\phi_i\rangle$ for an eigenstate and $\lambda_i$ for the corresponding observed value, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation: $|\psi\rangle = \sum_i c_i |\phi_i\rangle$. The kets $|\phi_i\rangle$ specify the different available quantum "alternatives", i.e., particular quantum states. The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable, though the converse is not necessarily true. Collapse To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single-component eigenstate of the observable: $|\psi\rangle = \sum_i c_i |\phi_i\rangle \rightarrow |\phi_j\rangle$, where the arrow represents a measurement of the observable corresponding to the $\phi$ basis. For any single event, only one eigenvalue is measured, chosen randomly from among the possible values. Meaning of the expansion coefficients The complex coefficients $c_i$ in the expansion of a quantum state in terms of eigenstates $|\phi_i\rangle$ can be written as a (complex) overlap of the corresponding eigenstate and the quantum state: $c_i = \langle \phi_i | \psi \rangle$. They are called the probability amplitudes. The square modulus $|c_i|^2$ is the probability that a measurement of the observable yields the eigenstate $|\phi_i\rangle$. The sum of the probabilities over all possible outcomes must be one: $\sum_i |c_i|^2 = 1$. As examples, individual counts in a double slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the final distribution has equal numbers of events in each area. This statistical aspect of quantum measurements differs fundamentally from classical mechanics. 
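A minimal numerical sketch of the statistics just described: the amplitudes below are arbitrary example values rather than those of any particular system, and repeatedly simulating the random "collapse" reproduces the Born-rule probabilities $|c_i|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example amplitudes c_i for a three-state superposition,
# normalised so that sum |c_i|^2 = 1.
amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.64j, 0.48 + 0.0j])
amplitudes /= np.linalg.norm(amplitudes)

probabilities = np.abs(amplitudes) ** 2          # Born rule: P(i) = |c_i|^2
assert np.isclose(probabilities.sum(), 1.0)

# Each simulated "measurement" collapses the state onto one eigenstate,
# chosen at random with the Born-rule probabilities.
outcomes = rng.choice(len(amplitudes), size=10_000, p=probabilities)
print(probabilities)                    # ideal probabilities
print(np.bincount(outcomes) / 10_000)   # empirical frequencies approach them
```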
In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information. Terminology The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable. The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse". The measurement problem The Schrödinger equation describes quantum systems but does not describe their measurement. Solution to the equations include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics. Physical approaches to collapse Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself". Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem shows that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule. The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. 
This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent. Quantum decoherence Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them. The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. The combined wave function of the system and environment continue to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce it to a single eigenstate. History The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature. John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change: The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse). The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation. In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. 
Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical and to resolve with the observed probabilities of the Born rule. Beginning in 1970, H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment, and that such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but it has not yet succeeded in providing a detailed model to replace the collapse postulate of orthodox quantum mechanics. By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, which gives the same value when immediately repeated, and the second kind, which gives different values when repeated. See also Arrow of time Interpretations of quantum mechanics Quantum decoherence Quantum interference Quantum Zeno effect Schrödinger's cat Stern–Gerlach experiment Wave function collapse (algorithm) References External links Concepts in physics Quantum measurement
Wave function collapse
Physics
2,254
41,966,652
https://en.wikipedia.org/wiki/NGC%20100
NGC 100 is a galaxy located approximately 60 million light-years from the Solar System in the constellation Pisces. It has an apparent magnitude of 13.2. It was first discovered on 10 November 1885 by American astronomer Lewis Swift. See also List of NGC objects (1–1000) Pisces (constellation) References External links SEDS 18851110 0100 0231 NGC 0100 Discoveries by Lewis Swift
NGC 100
Astronomy
85
58,041,927
https://en.wikipedia.org/wiki/NGC%203873
NGC 3873 is an elliptical galaxy located about 300 million light-years away in the constellation Leo. The galaxy was discovered by astronomer Heinrich d'Arrest on May 8, 1864. NGC 3873 is a member of the Leo Cluster. On May 15, 2007 a type Ia supernova designated as SN 2007ci was discovered in NGC 3873. See also NGC 3842 References External links 3873 36670 Leo (constellation) Leo Cluster Astronomical objects discovered in 1864 Elliptical galaxies 6735
NGC 3873
Astronomy
104
76,117,102
https://en.wikipedia.org/wiki/NAURA%20Technology%20Group
NAURA Technology Group (Naura) is a partially state-owned publicly listed Chinese company that manufactures semiconductor chip production equipment. It is currently the largest semiconductor equipment manufacturer in China. History In September 2001, Beijing Electronics Holdings, a government SASAC entity, initiated the establishment of Beijing Sevenstar Electronics (Sevenstar Electronics). On 16 March 2010, the company held its initial public offering on the ChiNext of Shenzhen Stock Exchange. It was the biggest gainer among mainland China stocks on that day, when its initial price of 33 yuan per share jumped 79 percent to 59 yuan. In 2016, Sevenstar Electronics acquired Beijing North Microelectronics (NMC) from its parent, Beijing Electronics Holdings. NMC specialized in silicon etching and physical vapor deposition (PVD) equipment. On 24 February 2017, after the company restructured following its acquisition, it was renamed NAURA Technology Group. In January 2018, the Committee on Foreign Investment in the United States (CFIUS) approved Naura's purchase of Akrion Systems, a Pennsylvania-based rival that was a supplier of advanced wafer surface preparation technology. This was the first takeover of an American company by a Chinese one to be approved since Donald Trump became President of the United States. Prior to that, the Trump administration had blocked all Chinese acquisitions of US target companies as a result of the China–United States trade war. In October 2022, Naura told its American employees in China to stop taking part in R&D activities to comply with the United States New Export Controls on Advanced Computing and Semiconductors to China. Naura stated that its subsidiary, Beijing Naura Magnetoelectric Technology, was on the Bureau of Industry and Security unverified list, although it accounted for only 0.5% of the company's annual revenue. Its share price dropped 20% that week. A week later, US trade officials from the American embassy in Beijing held talks with executives of Naura. In December 2022, Beijing Naura Magnetoelectric Technology was removed from the Bureau of Industry and Security unverified list after its bona fides were able to be verified. In February 2023, Yangtze Memory Technologies reduced its equipment purchase orders from Naura by 70%. The cancellations started in October 2022, around the same time the US export controls came into effect. In January 2024, Naura stated it expected its 2023 revenue to increase by around half from a year earlier as its technology developments allowed it to fulfill local demand and gain a greater market share in the country. In February 2024, Bloomberg News reported that Naura was one of the top investment picks among Wall Street firms such as Barclays and Sanford C. Bernstein. A company comparable to Applied Materials, it would be able to fulfill local market demand in China and fill the void left by foreign firms unable to continue doing business due to geopolitical restrictions. In April 2024, it was reported that Naura was starting research on lithography systems. In September 2024, Taiwanese authorities accused eight mainland Chinese technology companies, including Naura, of illegally poaching talent from Taiwan. Naura denied poaching local workers and stated its office in Taiwan operated in accordance with local laws and regulations. In December 2024, Naura was targeted in a new round of US export controls and added to the United States Department of Commerce's Entity List. 
Business lines Naura has four business lines: Semiconductors (plasma etching, PVD, CVD, oxidation/diffusion, cleaning system, and annealing) Vacuum technology (heat treatment, crystal growth and magnetic material) Lithium battery equipment Precision components (resistors, capacitors, crystal devices, and module power supplies) See also Semiconductor industry in China Applied Materials References External links 2001 establishments in China 2010 initial public offerings Companies based in Beijing Companies listed on the Shenzhen Stock Exchange Electronics companies established in 2001 Equipment semiconductor companies Government-owned companies of China Semiconductor companies of China Companies in the CSI 100 Index 2001 in Beijing
NAURA Technology Group
Engineering
817
9,141,957
https://en.wikipedia.org/wiki/Ground%20effect%20%28cars%29
In car design, ground effect is a series of effects which have been exploited in automotive aerodynamics to create downforce, particularly in racing cars. This has been the successor to the earlier dominant aerodynamic focus on streamlining. The international Formula One series and American racing IndyCars employ ground effects in their engineering and designs. Similarly, they are also employed in other racing series to some extent; however, across Europe, many series employ regulations (or complete bans) to limit its effectiveness on safety grounds. Theory In racing cars, a designer's aim is for increased downforce and grip to achieve higher cornering speeds. A substantial amount of downforce is available by understanding the ground to be part of the aerodynamic system in question, hence the name "ground effect". Starting in the mid-1960s, 'wings' were routinely used in the design of race cars to increase downforce (which is not a type of ground effect). Designers shifted their efforts at understanding air flow around the perimeter, body skirts, and undersides of the vehicle to increase downforce with less drag than compared to using a wing. This kind of ground effect is easily illustrated by taking a tarpaulin out on a windy day and holding it close to the ground: it can be observed that when close enough to the ground the tarp will be drawn towards the ground. This is due to Bernoulli's principle; as the tarp gets closer to the ground, the cross sectional area available for the air passing between it and the ground shrinks. This causes the air to accelerate and as a result pressure under the tarp drops while the pressure on top is unaffected, and together this results in a net downward force. The same principles apply to cars. The Bernoulli principle is not the only aspect of mechanics in generating ground-effect downforce. A large part of ground-effect performance comes from taking advantage of viscosity. In the tarp example above, neither the tarp nor the ground is moving. The boundary layer between the two surfaces works to slow down the air between them which lessens the Bernoulli effect. When a car moves over the ground, the boundary layer on the ground becomes helpful. In the reference frame of the car, the ground is moving backwards at some speed. As the ground moves, it pulls on the air above it and causes it to move faster. This enhances the Bernoulli effect and increases downforce. It is an example of Couette flow. While such downforce-producing aerodynamic techniques are often referred to with the catch-all term "ground effect", they are not strictly speaking a result of the same aerodynamic phenomenon as the ground effect which is apparent in aircraft at very low altitudes. History American Jim Hall developed and built his Chaparral cars around the principles of ground effects, pioneering them. His 1961 car attempted to use the shaped underside method but there were too many other aerodynamic problems with the car for it to work properly. His 1966 cars used a dramatic high wing for their downforce. His Chaparral 2J "sucker car" of 1970 was revolutionary. It had two fans at the rear of the car driven by a dedicated two-stroke engine; it also had "skirts", which left only a minimal gap between car and ground, to seal the cavity from the atmosphere. Although it did not win a race, some competition had lobbied for its ban, which came into place at the end of that year. Movable aerodynamic devices were banned from most branches of the sport. 
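The continuity-plus-Bernoulli estimate described under Theory above can be put into rough numbers. The sketch below assumes idealised incompressible, inviscid flow with no boundary-layer or three-dimensional effects, so the figures are illustrative rather than representative of any real car.

```python
# Rough, idealised estimate only: the gap heights, speed and reference area
# below are invented for illustration.
RHO_AIR = 1.225  # air density in kg/m^3 at sea level

def underbody_pressure_drop(v_car: float, gap_inlet: float, gap_min: float) -> float:
    """Pressure drop (Pa) where the underbody gap narrows from gap_inlet to gap_min.

    Continuity (per unit width): v_min = v_car * gap_inlet / gap_min.
    Bernoulli: dp = 0.5 * rho * (v_min**2 - v_car**2).
    """
    v_min = v_car * gap_inlet / gap_min
    return 0.5 * RHO_AIR * (v_min ** 2 - v_car ** 2)

def downforce(pressure_drop: float, area: float) -> float:
    """Net downward force (N) if that suction acts over `area` square metres."""
    return pressure_drop * area

dp = underbody_pressure_drop(v_car=50.0, gap_inlet=0.10, gap_min=0.05)  # 50 m/s ~ 180 km/h
print(f"{dp:.0f} Pa suction, {downforce(dp, 1.5):.0f} N over 1.5 m^2")
```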
In 1968, the argentine designer and engineer, Heriberto Pronello, developed the Pronello Huayra-Ford for the Sport Prototipo Argentino category, making its first appearance in Córdoba for the 1969 season with Carlos Reutemann and Carlos Pascualini as drivers. During 1968, a 1/5 scale model was made, which was tested in the wind tunnel of the Fábrica Militar de Aviones (FMA) usually employed by the Argentine Air Force, demonstrating the functionality of the ground effect at that scale. In 2023, the Pronello Huayra chassis #002 was invited to the Goodwood Festival Of Speed. During its stay in England, the car was taken to the Catesby tunnel, where a complete aerodynamic analysis was carried out by the argentine engineer and professor Sergio Rinland. "We always thought it had ground effect... When Heriberto tested it at the National University of Córdoba, he verified its air resistance with a 1/5 scale model that was perfect, without door and hood openings, without the intake turrets..." Rinland said. “The tests we did in the Catesby Tunnel demonstrated its great aerodynamic efficiency: we obtained a Cx 0.25 with the short tail and a Cx 0.23 with the long tail, which it used on the fastest circuits. Almost, almost what Heriberto had measured at the time” “It has a slippery upper shape and a flat floor with a diffuser that gave it quite an edge in its day. The diffuser has an expansion ratio that puts it staggeringly close to the maximum downforce you can get from a diffuser. The car was at the tunnel with pressure tapings added to it, in order to look at the pressure distribution around the car which looks to completely confirm that it works exactly as the designer expected.”, explained Willem Toet. These tests were carried out with and without the "long tail" which was used for high-speed circuits, with the vehicle propelled by its own means, at working temperature, returning consistent and repeatable results. Formula One was the next setting for ground effect in racing cars. Several Formula One designs came close to the ground-effect solution which would eventually be implemented by Lotus. In 1968 and 1969, Tony Rudd and Peter Wright at British Racing Motors (BRM) experimented on track and in the wind tunnel with long aerodynamic section side panniers to clean up the turbulent airflow between the front and rear wheels. Both left the team shortly after and the idea was not taken further. Robin Herd at March Engineering, on a suggestion from Wright, used a similar concept on the 1970 March Formula One car. In both cars the sidepods were too far away from the ground for significant ground effect to be generated, and the idea of sealing the space under the wing section to the ground had not yet been developed. At about the same time, Shawn Buckley began his work in 1969 at the University of California, Berkeley on undercar aerodynamics sponsored by Colin Chapman, founder of Formula One Lotus. Buckley had previously designed the first high wing used in an IndyCar, Jerry Eisert's "Bat Car" of the 1966 Indianapolis 500. By proper shaping of the car's underside, the air speed there could be increased, lowering the pressure and pulling the car down onto the track. His test vehicles had a Venturi-like channel beneath the cars sealed by flexible side skirts that separated the channel from above-car aerodynamics. He investigated how flow separation on the undersurface channel could be influenced by boundary layer suction and divergence parameters of the underbody surface. 
Later, as a mechanical engineering professor at MIT, Buckley worked with Lotus developing the Lotus 78. On a different tack, Brabham designer Gordon Murray used air dams at the front of his Brabham BT44s in 1974 to exclude air from flowing under the vehicle. Upon discovering that these tended to wear away with the pitching movement of the car, he placed them further back and discovered that a small area of negative pressure was formed under the car, generating a useful amount of downforce - around . McLaren produced similar underbody details for their McLaren M23 design. In 1977 Rudd and Wright, now at Lotus, developed the Lotus 78 'wing car', based on a concept from Lotus owner and designer Colin Chapman. Its sidepods, bulky constructions between front and rear wheels, were shaped as inverted aerofoils and sealed with flexible "skirts" to the ground. The design of the radiators, embedded into the sidepods, was partly based on that of the de Havilland Mosquito aircraft. The team won five races that year, and two in 1978 while they developed the much improved Lotus 79. The most notable contender in 1978 was the Brabham-Alfa Romeo BT46B Fancar, designed by Gordon Murray. Its fan, spinning on a horizontal, longitudinal axis at the back of the car, took its power from the main gearbox. The car avoided the sporting ban by claims that the fan's main purpose was for engine cooling, as less than 50% of the airflow was used to create a depression under the car. It raced just once, with Niki Lauda winning at the 1978 Swedish Grand Prix. The car's advantage was proven after the track became oily. While other cars had to slow, Lauda was able to accelerate over the oil due to the tremendous downforce which rose with engine speed. The car was also observed to squat when the engine was revved at a standstill. Brabham's owner, Bernie Ecclestone, who had recently become president of the Formula One Constructors Association, reached an agreement with other teams to withdraw the car after three races. However the Fédération Internationale de l'Automobile (FIA), governing body of Formula One and many other motorsport series, decided to ban 'fan cars' with almost immediate effect. The Lotus 79, on the other hand, went on to win six races and the world championship for Mario Andretti and gave teammate Ronnie Peterson a posthumous second place, demonstrating just how much of an advantage the cars had. In the following years other teams copied and improved on the Lotus until cornering speeds became dangerously high, resulting in several severe accidents in 1982; flat undersides became mandatory for 1983. Part of the danger of relying on ground effects to corner at high speeds is the possibility of the sudden removal of this force; if the underside of the car contacts the ground, the flow is constricted too much, resulting in almost total loss of any ground effects. If this occurs in a corner where the driver is relying on this force to stay on the track, its sudden removal can cause the car to abruptly lose most of its traction and skid off the track. After a forty-year ban, ground effect returned to Formula 1 in 2022 under the latest set of regulation changes. The effect was used in its most effective form in IndyCar designs. IndyCars did not use ground effect as substantially as Formula One. For example, they lacked the use of skirts to seal off the underbody of the car. 
IndyCars also rode higher than ground effect F1 cars and relied on wings for significant downforce as well, creating an effective balance between over the car downforce and ground effect. Porpoising "Porpoising" is a term commonly used to describe a particular fault encountered in ground-effect racing cars. Racing cars had only been using their bodywork to generate downforce for just over a decade when Colin Chapman's Lotus 78 and 79 cars demonstrated that ground effect was the future in Formula One, so, at this point, under-car aerodynamics were still very poorly understood. To compound this problem the teams that were very keen to pursue ground effects tended to be the more poorly funded British "garagista" teams, who had little money to spare for wind tunnel testing, and tended simply to mimic the front-running Lotuses (including the Kauhsen and Merzario teams). This led to a generation of cars that were designed as much by hunch as by any great knowledge of the finer details, making them extremely pitch-sensitive. As the centre of pressure on the sidepod aerofoils moved about depending on the car's speed, attitude, and ground clearance, these forces interacted with the car's suspension systems, and the cars began to resonate, particularly at slow speeds, rocking back and forth - sometimes quite violently. Some drivers were known to complain of sea-sickness. This rocking motion, like a porpoise diving into and out of the sea as it swims at speed, gives the phenomenon its name. These characteristics, combined with a rock-hard suspension, resulted in the cars giving an extremely unpleasant ride. Ground effects were largely banned from Formula One in the early 1980s until 2022, but Group C sportscars and other racing cars continued to suffer from porpoising until better knowledge of ground effects allowed designers to minimise the problem. At the first pre-season test in Barcelona ahead of the 2022 Formula One World Championship, George Russell said extreme porpoising could lead to safety issues and later stated he was suffering from chest pain due to extreme porpoising during the 2022 Emilia Romagna Grand Prix. At the 2022 Azerbaijan Grand Prix, Lewis Hamilton struggled to get out of the car after the race due to violent porpoising. See also Automotive aerodynamics Formula One car Ground effect in aircraft Ground-effect train Venturi effect References External links Photoessayist.com: The Chaparral 2J VintageRPM: Chaparral history 8W: Brabham-Alfa BT46B "fan car" Dennis David: Lotus 79 Aerodynamics Motorsport terminology Vehicle dynamics
Ground effect (cars)
Chemistry,Engineering
2,700
15,592,339
https://en.wikipedia.org/wiki/Data%20binding
In computer programming, data-binding is a general technique that binds data sources from the provider and consumer together and synchronizes them. This is usually done with two data/information sources with different languages, as in XML data binding and UI data binding. In UI data binding, data and information objects of the same language, but different logic function are bound together (e.g., Java UI elements to Java objects). In a data binding process, each data change is reflected automatically by the elements that are bound to the data. The term data binding is also used in cases where an outer representation of data in an element changes, and the underlying data is automatically updated to reflect this change. As an example, a change in a TextBox element could modify the underlying data value. Data binding frameworks and tools List of examples of data binding frameworks and tools for different programming languages: C# .NET Windows Presentation Foundation (WPF) Blazor Windows Forms MAUI Delphi DSharp third-party data binding tool OpenWire Visual Live Binding—third-party visual data binding tool LiveBindings Java Google Web Toolkit JavaFX Eclipse JavaScript Objective-C AKABeacon iOS Data Binding framework Swift SwiftUI Scala Binding.scala See also XML data binding UI data binding References Further reading Data management
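A minimal sketch of the UI data-binding idea, not the API of any of the frameworks listed above: a change in the data object is pushed to the bound element, and an edit made through the element is written back to the data. All names here are invented for the example.

```python
from typing import Callable

class Observable:
    """Holds a value and notifies subscribers whenever it changes."""

    def __init__(self, value: str = ""):
        self._value = value
        self._subscribers: list[Callable[[str], None]] = []

    @property
    def value(self) -> str:
        return self._value

    @value.setter
    def value(self, new_value: str) -> None:
        if new_value != self._value:
            self._value = new_value
            for notify in self._subscribers:
                notify(new_value)

    def bind(self, callback: Callable[[str], None]) -> None:
        self._subscribers.append(callback)
        callback(self._value)  # push the current value immediately

class TextBox:
    """Stand-in for a UI text box, two-way bound to an Observable."""

    def __init__(self, model: Observable):
        self.model = model
        self.text = ""
        model.bind(lambda v: setattr(self, "text", v))  # model -> UI

    def user_types(self, text: str) -> None:
        self.text = text
        self.model.value = text                          # UI -> model

name = Observable("Ada")
box = TextBox(name)
name.value = "Grace"      # data change is reflected in the bound element
print(box.text)           # "Grace"
box.user_types("Linus")   # edit through the element updates the data
print(name.value)         # "Linus"
```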
Data binding
Technology
270
12,464,813
https://en.wikipedia.org/wiki/Memory%20ProteXion
For computer memory, Memory ProteXion, found in IBM xSeries servers, is a form of "redundant bit steering". This technology uses redundant bits in a data packet to recover from a DIMM failure. Memory ProteXion differs from normal ECC error correction in that it uses only 6 bits for ECC, leaving 2 bits spare. These 2 spare bits can be used to re-route data from failed memory, much like a hot spare in a RAID array. The ECC is used to reconstruct the data, and the spare bits to store it. A single failure does not cause a predictive failure analysis (PFA) to be issued on the DIMM, but two or more failures will issue a PFA to inform the system administrator that a replacement is needed. See also Chipkill External links Memory ProteXion Computer memory Error detection and correction
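The sketch below illustrates the bit-steering idea in the abstract: after ECC reconstructs the value of a failed bit lane, that lane is remapped onto one of the spare bits. The word width, class names and recovery logic are invented for illustration and do not reflect IBM's actual ECC code words or memory-controller behaviour.

```python
class SteeredWord:
    """A data word with spare bit lanes that can stand in for failed ones."""

    def __init__(self, data_bits: list[int], spare_bits: int = 2):
        self.bits = list(data_bits)
        self.spares = [0] * spare_bits   # unused until a lane fails
        self.steering = {}               # failed lane index -> spare index

    def mark_failed(self, lane: int, recovered_value: int) -> None:
        """Steer a failed bit lane to the next free spare bit.

        `recovered_value` is the bit that ECC reconstructed for that lane.
        """
        spare_index = len(self.steering)
        if spare_index >= len(self.spares):
            raise RuntimeError("no spare bits left; replace the DIMM")
        self.steering[lane] = spare_index
        self.spares[spare_index] = recovered_value

    def read(self) -> list[int]:
        """Read the word, pulling steered lanes from the spare bits."""
        return [self.spares[self.steering[i]] if i in self.steering else b
                for i, b in enumerate(self.bits)]

word = SteeredWord([1, 0, 1, 1, 0, 1, 0, 0])
word.mark_failed(lane=3, recovered_value=1)   # lane 3 failed; ECC recovered a 1
print(word.read())                            # the word still reads back correctly
```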
Memory ProteXion
Technology,Engineering
213
66,004,557
https://en.wikipedia.org/wiki/Cora%20G.%20Burwell
Cora Gertrude Burwell (June 25, 1883 – June 20, 1982) was an American astronomical researcher specialized in stellar spectroscopy. She was based at Mount Wilson Observatory from 1907 to 1949. Early life Cora Gertrude Burwell was born in Massachusetts and raised in Stafford Springs, Connecticut. She graduated from Mount Holyoke College in 1906 and was active in Holyoke alumnae activities in the Los Angeles area. Career In July, 1907, Burwell was appointed to a "human computer" position at Mount Wilson Observatory. In 1910, she attended the fourth conference of the International Union for Cooperation in Solar Research, when it was held at Mount Wilson. Burwell specialized in stellar spectroscopy. She was solo author on some scientific publications, and co-authored several others (some of which she was lead author), with notable collaborators including Dorrit Hoffleit, Henrietta Swope, Walter S. Adams, and Paul W. Merrill. With Merrill she compiled several catalogs of Be stars, in 1933, 1943, 1949, and 1950. She also helped to tend the Mount Wilson Observatory Library. She retired from the observatory in 1949, but continued speaking about astronomy to community groups. She also published a book of poetry, Neatly Packed. Personal life Cora Burwell lived in Pasadena, and later in Monrovia with her sister, Priscilla Burwell. She died in 1982, two days before her 99th birthday, in Los Angeles. References 1883 births 1982 deaths 20th-century American women scientists Human computers Mount Holyoke College alumni American women astronomers People from Stafford Springs, Connecticut Scientists from Massachusetts Scientists from Connecticut 20th-century American astronomers Spectroscopists
Cora G. Burwell
Technology
337
59,143,410
https://en.wikipedia.org/wiki/Uncinocarpus%20uncinatus
Uncinocarpus uncinatus is a species of microfungi that grows on dung and other keratinous materials such as bone. It was the second species to be designated as part of the genus Uncinocarpus. The species was first described by Randolph S. Currah in 1985; synonyms include Myxotrichum uncinatum and Gymnoascus uncinatus. Morphology In culture, colonies of U. uncinatus are yellow to orange-brown to red-brown in colour, growing paler towards the margin. Like other members of Uncinocarpus, it develops hooked and occasionally spiralling (uncinate) appendages which typically, but not always, possess spore-bearing structures (gymnothecia). The appendages of U. uncinatus are thick and wide to the distal end, unlike that of U. reesii, which taper to a point. References Onygenales Fungi described in 1985 Fungus species
Uncinocarpus uncinatus
Biology
204
46,530,282
https://en.wikipedia.org/wiki/Robotaxi
A robotaxi, also known as robot taxi, robo-taxi, self-driving taxi or driverless taxi, is an autonomous car (SAE automation level 4 or 5) operated for a ridesharing company. Some studies have hypothesized that robotaxis operated in an autonomous mobility on demand (AMoD) service could be one of the most rapidly adopted applications of autonomous cars at scale and a major mobility solution, especially in urban areas. Moreover, they could have a very positive impact on road safety, traffic congestion and parking. Robotaxis could also reduce urban pollution and energy consumption, since these services will most probably use electric cars and for most of the rides, less vehicle size and range is necessary compared to individually owned vehicles. The expected reduction in number of vehicles means less embodied energy; however energy consumption for redistribution of empty vehicles must be taken into account. Robotaxis would reduce operating costs by eliminating the need for a human driver, which might make it an affordable form of transportation and increase the popularity of transportation-as-a-service (TaaS) as opposed to individual car ownership. Such developments could lead to job destruction and new challenges concerning operator liabilities. In 2023, some robotaxis caused congestion when they blocked roads due to lost cellular connectivity, and others failed to properly yield to emergency vehicles. there has been only one fatality associated with a robotaxi, a pedestrian who was hit by an Uber test vehicle in 2018. Predictions of the widespread and rapid introduction of robotaxis – by as early as 2018 – have not been realized. There are a number of trials underway in cities around the world, some of which are open to the public and generate revenue. However, as of 2021, questions have been raised as to whether the progress of self-driving technology has stalled and whether issues of social acceptance, cybersecurity and cost have been addressed. Status Vehicle costs So far all the trials have involved specially modified passenger cars with space for two or four passengers sitting in the back seats behind a partition. LIDAR, cameras and other sensors have been used on all vehicles. The cost of early vehicles was estimated in 2020 at up to US$400,000 due to custom manufacture and specialized sensors. However, the prices of some components such as LIDAR have fallen significantly. In January 2021, Waymo stated its costs were approximately $180,000 per vehicle, and its operating cost at $0.30 per mile (~$0.19 per km), well below Uber and Lyft, but this excludes the cost of fleet technicians and customer support. Baidu announced in June 2021 it would start producing robotaxis for 500,000 yuan ($77,665) each. Tesla has discussed a sub-$25,000 Tesla Robotaxi, and as of 2023 is designing an assembly line that will accommodate the vehicle. Passenger tests Several companies are testing robotaxi services, especially in the United States and in China. All operate only in a geo-fenced area. Service areas for robotaxis, also known as the Objective Design Domain (ODD), are specially designated zones where robotaxis can safely provide service. As of 2024, Baidu's Apollo Go had carried the most passengers, over 6 million by April 2024. Other providers in China include AutoX, DiDi, Pony.ai, WeRide, all operating in 10 or more cities. In the US, Waymo is the most prominent provider, operating in San Francisco, Phoenix, and Los Angeles. 
A 2024 study of Waymo indicated an 85% reduction in injury crashes per mile driven. Separate to these efforts have been trials of larger shared autonomous vehicles on fixed routes with designated stops, able to carry between 6 and 10 passengers. These shuttle buses operate at low speeds. Current obstacles to robotaxi At present, it is not only technical issues that hinder the widespread use of robotaxi, but also social issues. First, consumers' concerns about the reliability and safety of self-driving taxis are a major obstacle. For example, system failures during the service process and the risk of accident perception will reduce potential users. In addition, consumers still have doubts about whether robotaxi can cope with complex urban environments or severe weather conditions. Licenses In February 2018 Arizona granted Waymo a Transportation Network Company permit. In February 2022 the California Public Utilities Commission (CPUC) issued Drivered Deployment permits to Cruise and Waymo to allow passenger service in autonomous vehicles with a safety driver present in the vehicle. These carriers must hold a valid California Department of Motor Vehicles (DMV) Deployment permit and meet the requirements of the CPUC Drivered Deployment program. In June 2022, Cruise received approval to operate a commercial robotaxi service in San Francisco. In April 2022, China gave Baidu and Pony.ai its first permits to deploy robotaxis without safety drivers on open roads within a 23 square mile area in the Beijing Economic-Technological Development Area. In August 2023, the CPUC approved granting additional operating authority for Cruise LLC and Waymo LLC to conduct commercial passenger service using vehicles without safety drivers in San Francisco. The approval includes the ability for both companies to charge fares for rides at any time of day. History First trials In August 2016, MIT spinoff NuTonomy was the first company to make robotaxis available to the public, starting to offer rides with a fleet of 6 modified Renault Zoes and Mitsubishi i-MiEVs in a limited area in Singapore. NuTonomy later signed three significant partnerships to develop its robotaxi service: with Grab, Uber’s rival in Southeast Asia, with Groupe PSA, which is supposed to provide the company with Peugeot 3008 SUVs and the last one with Lyft to launch a robotaxi service in Boston. In August 2017, Cruise Automation, a self-driving startup acquired by General Motors in 2016, launched the beta version of a robotaxi service for its employees in San Francisco using a fleet of 46 Chevrolet Bolt EVs. Testing and revenue service timeline Trials listed have a safety driver unless otherwise indicated. The commencement of a trial does not mean it is still active. August 2016 - NuTonomy launched its autonomous taxi service using a fleet of 6 modified Renault Zoes and Mitsubishi i-MiEVs in Singapore September 2016 - Uber started allowing a select group of users in Pittsburgh, Pennsylvania to order robotaxis from a fleet of 14 vehicles. Two Uber engineers were always in the front seats of each vehicle. March 2017 - An Uber self-driving car was hit and flipped on its side by another vehicle that failed to yield. In October 2017, Uber started using only one test driver. April 2017 - Waymo started a large scale robotaxi tests in a geo-fenced suburb of Phoenix, Arizona with a driver monitoring each vehicle. The service area was about . In November 2017 some testing without drivers began. Commercial operations began in November 2019. 
August 2017 - Cruise Automation launched the beta version robotaxi service for 250 employees (10% of its staff) in San Francisco using a fleet of 46 vehicles. March 2018 - A woman attempting to cross a street in Tempe, Arizona at night was struck and killed by an Uber vehicle while the onboard safety driver was watching videos. Uber later restarted testing, but only during daylight hours and at slower speeds. August 2018 - Yandex began a trial with two vehicles in Innopolis, Russia December 2018 - Waymo started self-driving taxi service, dubbed Waymo One, in Arizona for paying customers. April 2019 - Pony.ai launched a pilot system covering in Guangzhou, China for employees and invited affiliated, serving pre-defined pickup points. November 2019 - WeRide RoboTaxi began a pilot service with 20 vehicles in Guangzhou and Huangpu over an area of November 2019 - Pony.ai started a three-month trial in Irvine, California with 10 cars and stops for pickup and drop off. April 2020 - Baidu opened its trial of 45 vehicles in Changsha, China to public users for free trips, serving 100 designated spots on a set network. Services operation from 9:20am to 4:40pm with a safety-driver and a "navigator", allowing space for two passengers in the back. June 2020 - DiDi robotaxi service begins operation in Shanghai in an area that covers Shanghai's Automobile Exhibition Center, the local business districts, subway stations and hotels in the downtown area. August 2020. Baidu began offering free trips, with app bookings, on its trial in Cangzhou, China which serves 55 designated spots over pre-defined routes. December 2020. AutoX (which is backed by Alibaba Group) launched a non-public trial of driverless robotaxis in Shenzhen with 25 vehicles. The service was then opened to the public in January 2021. February 2021 - Waymo One began limited robotaxi service in a number of suburbs of San Francisco for a selection of its own employees. In August 2021 the public was invited to apply to use service, with places limited. A safety driver is present in each vehicles. The number of vehicles involved has not been disclosed. May 2021 - Baidu commences a commercial robotaxi service with ten Apollo Go vehicles in a area with eight pickup and drop-off stops, in Shougang Park in western Beijing July 2021 - Baidu opened a pilot program to the public in Guangzhou with a fleet of 30 sedans serving in the Huangpu district. 200 designated spots are served between 9:30am and 11pm every day. July 2021 - DeepRoute.ai began a free-of-charge trial with 20 vehicles in downtown Shenzhen serving 100 pickup and dropoff locations. February 2022 - Cruise opened up its driverless cars in San Francisco to the public. February 2023 - Zoox, the self-driving startup owned by Amazon, carried passengers in its robotaxi for the first time in Foster City, California. August 2023 - Waymo and Cruise were authorized by the CPUC to collect fares for driverless rides in San Francisco. December 2023, China finalized regulations on commercial robotaxi operation. Roboshuttles or robotrucks are required to maintain in-car drivers. Robotaxis can use remote operators. The robotaxi:remote operator ratio cannot exceed 3:1. Operators must be certified. Accident reporting rules specify required data. April 2024, Baidu Apollo, AutoX, Pony.ai, Didi and WeRide each operated in 10 to 25 cities, with fleets hundreds of robotaxis. Baidu Apollo had traveled over without a major accident. 
July 2024 - In Wuhan, Baidu's Apollo Go robotaxis' attempts at commercialisation received massive attention on social media. Their low price (base fares start as low as 4 yuan, about 55 US cents, compared with 18 yuan, about $2.48, for a taxi driven by a human) was welcomed by some. Meanwhile, the rapid adoption of the driverless taxis has rattled China's gig-economy workforce. However, their popularity boosted Baidu's shares. August 2024 - In most areas of Wuhan, Baidu's Apollo Go robotaxis now operate fully autonomously without any safety personnel on board. The company recorded 899,000 rides in the second quarter of 2024, bringing the total number of rides to 7 million as of July 28, 2024. Notable commercial ventures Uber Advanced Technology Group Uber began development of self-driving vehicles in early 2015. In September 2016, the company started a trial allowing a select group of users of its ride-hailing service in Pittsburgh to order robotaxis from a fleet of 14 modified Ford Fusions. The test extended to San Francisco with modified Volvo XC90s before being relocated to Tempe, Arizona in February 2017. In March 2017, one of Uber's robotaxis crashed in self-driving mode in Arizona, which led the company to suspend its tests before resuming them a few days later. In March 2018, Uber paused self-driving vehicle testing after the death of Elaine Herzberg in Tempe, Arizona, a pedestrian who was struck by an Uber vehicle as she attempted to cross the street while the onboard engineer was watching videos. Uber settled with the victim's family. In January 2021, Uber sold its self-driving division, Advanced Technologies Group (ATG), to Aurora Innovation for $4 billion while also investing $400 million into Aurora for a 26% ownership stake. Waymo In early 2017, Waymo, the Google self-driving car project which became an independent company in 2016, started a large public robotaxi test in Phoenix using 100 and then 500 more Chrysler Pacifica Hybrid minivans provided by Fiat Chrysler Automobiles as part of a partnership between the two companies. Waymo also signed a deal with Lyft to collaborate on self-driving cars in May 2017. In November 2017, Waymo revealed it had begun to operate some of its automated vehicles in Arizona without a safety driver behind the wheel. In December 2018, Waymo started a self-driving taxi service, dubbed Waymo One, in Arizona for paying customers. By November 2019, the service was operating autonomous vehicles without a safety backup driver. The autonomous taxi service was operating in San Francisco as of 2021. In December 2022, the company applied for a permit to operate self-driving taxi rides in California without a human operator present as backup. Baidu Apollo In September 2019, Baidu's autonomous driving unit Apollo launched its Apollo Go robotaxi service, with an initial fleet of 45 autonomous vehicles. Apollo Go has since expanded to more than 10 Chinese cities. In August 2022, Baidu achieved a landmark victory in the race for autonomous vehicles by securing the first permits in China to deploy fully driverless taxis in the cities of Wuhan and Chongqing. In May 2024, Baidu unveiled the Apollo ADFM, claimed to be the world's first Level 4 autonomous driving foundation model, along with the sixth-generation Apollo Go robotaxi, which can be produced for under $30,000. The company also said that by April 2024, Apollo had accumulated over 100 million kilometers of autonomous driving without major accidents. 
By August 2024, Apollo Go had deployed 400 robotaxis operating fully autonomously without any safety personnel on board in Wuhan, offering 24/7 service to 9 million residents. Baidu aims for Apollo Go to achieve operational unit breakeven in Wuhan by the end of 2024. GM Cruise In January 2020, GM subsidiary Cruise exhibited the Cruise Origin, a Level 4–5 driverless vehicle, intended to be used for a ride-hailing service. In February 2022, Cruise started a driverless taxi service in San Francisco. Also in February 2022, Cruise petitioned U.S. regulators (NHTSA) for permission to build and deploy a self-driving vehicle without human controls. The petition is pending. In April 2022, their partner Honda unveiled its Level 4 mobility service partners to roll out in central Tokyo in the mid-2020s using the Cruise Origin. There are signs that autonomously operated Cruise vehicles may interfere with emergency vehicles, and they have been culpable of at least one collision with a fire truck. On 2 October 2023, a Cruise vehicle operating autonomously (without driver supervision) collided with a pedestrian. Instead of stopping immediately, the vehicle misidentified the collision mechanics and presumed it had been struck from the side. Consequently, the vehicle proceeded to drag the pedestrian under the car until it came to a stop on the side of the road. Because the vehicle's response was deemed unacceptable and the company appeared to have withheld details of the crash from regulators, California regulators revoked its license to operate these cars. Cruise recalled all of its 950 vehicles in November 2023. These decisions were enacted in parallel with the exposure of safety risks, identified earlier within the Cruise company, regarding proper vehicle behavior around children and around construction sites. Tesla Since 2019, Tesla's CEO Elon Musk has incorrectly predicted each year that Tesla would have robotaxis on the road within 1 to 2 years. He was expected to announce the plans for Tesla's robotaxi on 8 August 2024, but the event was moved to 10 October 2024. During that event Tesla demonstrated two new vehicles, the two-seater Tesla Cybercab and the 14-seater (plus standing room) Tesla Robovan, which can carry up to 20 passengers. The company also reiterated that all of its other models of cars and pickup trucks would be usable as robotaxis after a software update and regulatory approval, which it expected at the earliest in California and Texas in 2025. Other developments Many automakers announced plans in 2015–2018 to develop robotaxis before 2025, and specific partnerships were signed between automakers, technology providers and service operators, including: The startup Zoox announcing in 2015 its ambition to build a robotaxi from scratch. BMW and Fiat Chrysler Automobiles partnering in 2016 with Intel and Mobileye to develop robotaxis by 2021. Baidu partnering in 2016 with Nvidia to develop autonomous cars and robotaxis. Daimler AG teaming up with Bosch in 2017 to develop the software for a robotaxi service by 2025. The Renault-Nissan-Mitsubishi Alliance partnering in 2017 with Transdev and DeNA to develop robotaxi services within 10 years. Honda releasing in 2017 an autonomous concept car, NeuV, that aims at being a personal robotaxi. Ford Motor's plan in 2017 to develop a robotaxi by 2021 through partnerships with several startups. 
Ford Motor investing $1 billion in the startup Argo AI in 2017 to develop autonomous cars and robotaxis; the startup was disbanded by Ford in 2022. Lyft and Ford partnering in 2017 to add Ford's self-driving cars to Lyft's ride-hailing network; Google leading a $1 billion investment in Lyft in 2017, which could have supported Waymo's robotaxi strategy; in 2021, Lyft's self-driving division was sold to Toyota. Delphi buying the startup NuTonomy for $400 million in 2017. Parsons Corporation announcing in 2017 a partnership with the automated-mobility operating-system company Renovo.auto to deploy and scale AMoD (autonomous mobility-on-demand) services. Didi Chuxing partnering in 2018 with the Renault-Nissan-Mitsubishi Alliance and other automakers to explore the future launch of robotaxi services in China. See also Self-driving car References Automotive technologies Robotics Transport culture Taxis
Robotaxi
Physics,Engineering
3,798
7,045
https://en.wikipedia.org/wiki/Concorde
Concorde is a retired Anglo-French supersonic airliner jointly developed and manufactured by Sud Aviation (later Aérospatiale) and the British Aircraft Corporation (BAC). Studies started in 1954, and France and the United Kingdom signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million. Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969. A market for 350 aircraft was predicted, and the manufacturers received up to 100 option orders from many major airlines. On 9 October 1975, it received its French Certificate of Airworthiness, followed by one from the UK CAA on 5 December. Concorde is a tailless aircraft design with a narrow fuselage permitting four-abreast seating for 92 to 128 passengers, an ogival delta wing and a droop nose for landing visibility. It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed. Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner had transatlantic range while supercruising at twice the speed of sound for 75% of the distance. Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976. Concorde entered service on 21 January 1976 with Air France from Paris-Roissy and British Airways from London Heathrow. Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977. Air France and British Airways remained the sole customers with seven airframes each, for a total production of twenty. Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only. Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built. On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four people on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The surviving aircraft were retired in 2003, 27 years after commercial operations had begun. All but two of the 20 aircraft built have been preserved and are on display across Europe and North America. Development Early studies In the early 1950s, Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study supersonic transport. The group met in February 1954 and delivered their first report in April 1955. Robert T. Jones' work at NACA had demonstrated that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin trapezoidal wings such as those seen on the control surfaces of many missiles, or aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730. This short wingspan produced little lift at low speed, resulting in long take-off runs and high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways and, to provide the fuel needed, "some horribly large aeroplanes" resulted. 
Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics. Slender deltas Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta". The team, including Eric Maskell, whose report "Flow Separation in Three Dimensions" contributed to an understanding of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex lowers the air pressure and causes lift. This had been noticed by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that the effect could be used to improve low-speed performance. Küchemann's and Weber's papers changed the entire nature of supersonic design. The delta had already been used on aircraft, but these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance, but would also have reasonable take-off and landing speeds using vortex generation. The aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low-speed handling qualities of such a design. Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment to have been the birth of the Concorde project. Supersonic Transport Aircraft Committee On 1 October 1956, the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), to develop a practical SST design and find industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test-bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft demonstrated safe control at very low speeds, about one third of the corresponding speed of the F-104 Starfighter. STAC stated that an SST would have economic performance similar to existing subsonic types. Lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. The aircraft would therefore need more thrust than a subsonic design of the same size, and although it would use more fuel in cruise, it would be able to fly more revenue-earning flights in a given time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs. STAC suggested that two designs naturally fell out of their work: a transatlantic model flying at about Mach 2, and a shorter-range version flying at Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. 
The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan suggested that the US was already involved in a similar project, and that if the UK failed to respond it would be locked out of an airliner market that he believed would be dominated by SST aircraft. In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed, shorter-range category. Both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of the British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation. Ogee planform selected Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes: the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had advantages and disadvantages. As they worked with these shapes, a practical concern grew so important that it forced the selection of one of these designs. Generally the wing's centre of pressure (CP, or "lift point") should be close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, it is common for the CG to move fore or aft. With a normal wing design this can be addressed by moving the wing slightly fore or aft to compensate. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and due to fuel use during flight, the ogee planform immediately came to the fore. To test the new wing, NASA assisted the team by modifying a Douglas F5D Skylancer to mimic the wing selection. In 1965 the NASA test aircraft successfully tested the wing and found that it noticeably reduced landing speeds compared with the standard delta wing. NASA also ran simulations at Ames that showed the aircraft would exhibit a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training. Partnership with Sud Aviation France had its own SST plans. In the late 1950s, the government requested designs from the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta: Nord suggested a ramjet-powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with the transatlantic US designs that they assumed were already on the drawing board. 
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work. France had no modern large jet engines and had already decided to buy a British design (as it had for the earlier subsonic Caravelle). As neither company had experience in the use of heat-resistant metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air heats the metal so much that it begins to soften. This lower speed would also shorten development and allow their design to fly before the Americans. Everyone involved agreed that Küchemann's ogee-shaped wing was the right one. The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. Common components could be used in both designs, with the shorter-range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively redesigned. The teams continued to meet through 1961, and by this time it was clear that the two aircraft would be very similar in spite of their different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines. Cabinet response, treaty While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft told the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and to the risk of "giving away" US technological leadership to a European partner. When the STAC plans were presented to the UK cabinet, the economic considerations were regarded as highly questionable, especially as they rested on the kind of development cost estimates that were repeatedly overrun in the industry. The Treasury presented a negative view, suggesting that there was no way the project would have any positive financial returns for the government, especially given "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2)", which made it prudent to expect the estimated cost "to turn out much too low". This led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee rejected the economic arguments, including the considerations of supporting the industry made by Thorneycroft. 
Their report in October stated that it was unlikely there would be any direct positive economic outcome, but that the project should still be considered because everyone else was going supersonic, and they were concerned they would be locked out of future markets. It appeared the project would not be likely to significantly affect other, more important, research efforts. At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies, and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle vetoed the UK's entry into the European Community in a speech on 25 January 1963. Naming At Charles de Gaulle's January 1963 press conference the aircraft was first called 'Concorde'. The name was suggested by the eighteen-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name Concorde is from the French word concorde, which has an English equivalent, concord. Both words mean agreement, harmony, or union. The name was changed to Concord by Harold Macmillan in response to a perceived slight by de Gaulle. At the French roll-out in Toulouse in late 1967, the British Minister of Technology, Tony Benn, announced that he would change the spelling back to Concorde. This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a letter from a Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!" In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde". Sales efforts Advertisements for Concorde during the late 1960s, placed in publications such as Aviation Week & Space Technology, predicted a market for 350 aircraft by 1980. The new consortium intended to produce one long-range and one short-range version, but prospective customers showed no interest in the short-range version, so it was later dropped. Concorde's costs spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977. Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events also dampened Concorde's sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made airlines cautious about aircraft with high fuel consumption, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. While carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. 
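Those passenger-mile-per-gallon figures translate directly into fuel burned per seat on a representative transatlantic sector. A minimal sketch follows; the 3,460-mile London–New York distance is an assumption chosen for illustration, not a figure from the article:

```python
# Fuel per passenger on an assumed ~3,460-statute-mile transatlantic sector,
# derived from the passenger-miles-per-US-gallon figures quoted above.
PM_PER_GALLON = {
    "Concorde": 15.8,
    "Boeing 707": 33.3,
    "Boeing 747": 46.4,
    "McDonnell Douglas DC-10": 53.6,
}
SECTOR_MILES = 3460  # illustrative London-New York distance (assumption)

for aircraft, pmpg in PM_PER_GALLON.items():
    gallons_per_pax = SECTOR_MILES / pmpg
    print(f"{aircraft:24s} ~{gallons_per_pax:4.0f} US gal per passenger")
# Concorde works out to roughly three times the per-seat fuel burn of a 747,
# the gap that made airlines wary after the 1973 oil crisis.
```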
A trend in favour of cheaper airline tickets also caused airlines such as Qantas to question Concorde's market suitability. During the early 2000s, Flight International described Concorde as "one of aerospace's most ambitious but commercially flawed projects". The consortium received orders (non-binding options) for more than 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six aircraft each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines. Testing The design work was supported by a research programme studying the flight characteristics of low-ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance. Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at Dallas/Fort Worth Regional Airport to mark the airport's opening. Concorde had initially held a great deal of customer interest, but the project was hit by order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues of supersonic aircraft – the sonic boom, take-off noise and pollution – had produced a change in the public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits. The US government cut federal funding for the Boeing 2707, its supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over noise concerns, although some of these restrictions were later relaxed. Professor Douglas Ross characterised the restrictions placed upon Concorde operations by President Jimmy Carter's administration as an act of protectionism towards American aircraft manufacturers. Programme cost The original programme cost estimate was £70 million in 1962. After cost overruns and delays, the programme eventually cost between £1.5 and £2.1 billion in 1976. This cost was the main reason the production run was much smaller than expected. 
Design General features Concorde is an ogival delta-winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It has an unusual tailless configuration for a commercial aircraft, as does the Tupolev Tu-144. Concorde was the first airliner to have a fly-by-wire flight-control system (in this case, analogue); the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy. Concorde pioneered the following technologies: For high speed and optimisation of flight: Double delta (ogee/ogival) shaped wings Variable engine air intake ramp system controlled by digital computers Supercruise capability For weight-saving and enhanced performance: Mach 2.02 cruising speed for optimum fuel consumption (the supersonic drag minimum, and turbojet engines are more efficient at higher speeds) Mainly aluminium construction using a high-temperature alloy similar to that developed for aero-engine pistons. This material gave low weight and allowed conventional manufacture (higher speeds would have ruled out aluminium) Full-regime autopilot and autothrottle allowing "hands off" control of the aircraft from climb-out to landing Fully electrically controlled analogue fly-by-wire flight control systems High-pressure hydraulic system permitting lighter hydraulic components Air data computer (ADC) for the automated monitoring and transmission of aerodynamic measurements (total pressure, static pressure, angle of attack, side-slip) Fully electrically controlled analogue brake-by-wire system No auxiliary power unit, as Concorde would only visit large airports where ground air start carts were available. Powerplant A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag (though they would be studied for future SSTs). Olympus turbojet technology was already available for development to meet the design requirements. Rolls-Royce proposed developing the RB.169 to power Concorde during its initial design phase, but developing a wholly new engine for a single aircraft would have been extremely costly, so the existing BSEL Olympus Mk 320 turbojet engine, which was already flying in the BAC TSR-2 supersonic strike bomber prototype, was chosen instead. Boundary layer management in the podded installation was put forward as simpler, needing only an inlet cone; however, Dr. Seddon of the RAE favoured a more integrated buried installation. One concern with placing two or more engines behind a single intake was that an intake failure could lead to a double or triple engine failure. While a ducted fan over the turbojet would reduce noise, its larger cross-section also incurred more drag. Acoustics specialists were confident that a turbojet's noise could be reduced, and SNECMA made advances in silencer design during the programme. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise, but was not pursued. 
By 1974, the spade silencers which projected into the exhaust were reported to be ineffective, but "entry-into-service aircraft are likely to meet their noise guarantees". The powerplant configuration selected for Concorde highlighted airfield noise, boundary layer management, interactions between adjacent engines, and the requirement that the powerplant, at Mach 2, tolerate pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes, and changes to intake and engine control laws, addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6, which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Situated behind the wing leading edge, the engine intake had a wing boundary layer ahead of it. Two-thirds of this layer was diverted, and the remaining third, which entered the intake, did not adversely affect intake efficiency except during pushovers, when the boundary layer thickened and caused surging. Wind tunnel testing helped define leading-edge modifications ahead of the intakes that solved the problem. Each engine had its own intake, and the nacelles were paired with a splitter plate between them to minimise the chance of one powerplant influencing the other; only above Mach 1.6 was an engine surge likely to affect the adjacent engine. The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high pressure recovery, to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor for intake control. This was the first use of a digital processor with full-authority control of an essential system in a passenger aircraft. It was developed by BAC's Electronics and Space Systems division after the analogue AICUs (developed by Ultra Electronics) fitted to the prototype aircraft were found to lack sufficient accuracy. Ultra Electronics also developed Concorde's thrust-by-wire engine control system. Engine failure causes problems on conventional subsonic aircraft: not only does the aircraft lose thrust on that side, but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without difficulties. During an engine failure the required air intake is virtually zero. 
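The payoff of the variable-ramp intake described above can be illustrated with textbook gas dynamics: decelerating Mach 2 flow through a single normal shock wastes far more stagnation pressure than a staged, multi-shock compression. A minimal sketch of the standard normal-shock relation follows; these are generic compressible-flow formulas, not Concorde's actual intake data:

```python
# Stagnation-pressure ratio across a normal shock (perfect gas, gamma = 1.4).
# Classic Rankine-Hugoniot result found in any compressible-flow text.
GAMMA = 1.4

def normal_shock_p0_ratio(m1: float, g: float = GAMMA) -> float:
    a = ((g + 1) * m1**2 / 2) / (1 + (g - 1) * m1**2 / 2)
    b = (g + 1) / (2 * g * m1**2 - (g - 1))
    return a ** (g / (g - 1)) * b ** (1 / (g - 1))

for mach in (1.2, 1.6, 2.0):
    print(f"M = {mach}: total-pressure recovery {normal_shock_p0_ratio(mach):.3f}")
# At Mach 2 a single normal shock keeps only ~72% of the stagnation pressure,
# while a weak terminal shock near Mach 1.2 keeps ~99% -- hence ramps that
# generate oblique shocks to slow the flow before the final normal shock.
```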
Since a failed engine needs virtually no intake air, engine failure on Concorde was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure. Concorde used reheat (afterburners) only at take-off and to pass through the transonic speed range, between Mach 0.95 and Mach 1.7. Heating problems Kinetic heating from the high-speed boundary layer caused the skin to heat up during supersonic flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Apart from the engine bay, the hottest part of any supersonic aircraft's structure is the nose, due to aerodynamic heating. Hiduminium R.R. 58, an aluminium alloy, was used throughout the aircraft because it was relatively cheap and easy to work with; the highest temperature it could sustain over the life of the aircraft was what limited the top speed to Mach 2.02. Concorde went through two cycles of cooling and heating during a flight: first cooling down as it gained altitude at subsonic speed, then heating up while accelerating to cruise speed, then cooling again when descending and slowing down, before heating again in the low-altitude air before landing. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing and then cooled it, and samples of metal were periodically taken for testing. The airframe was designed for a life of 45,000 flying hours. As the fuselage heated up, it expanded noticeably. The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft that conducted a final, retiring supersonic flight, the flight engineers placed their caps in this expanded gap, and the caps were wedged in place when the airframe cooled and shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning; the same method also cooled the hydraulics. During supersonic flight a visor was used to keep high-temperature air from flowing over the cockpit skin. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure. The white finish measurably reduced the skin temperature. In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at Mach 2 for no more than 20 minutes at a time, though there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations. Structural issues Due to its high speeds, large forces were applied to the aircraft during turns, causing distortion of the aircraft's structure. There were also concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by varying the ratio between the inboard and outboard elevon deflections with speed, including at supersonic speeds. Only the innermost elevons, attached to the stiffest area of the wings, were used at higher speeds. The narrow fuselage flexed, which was apparent to rear passengers looking along the length of the cabin. When any aircraft passes the critical Mach number of its airframe, the centre of pressure shifts rearwards. 
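The remedy described in the next paragraph, pumping fuel to move the centre of gravity, follows a simple lever rule. A minimal sketch with assumed, illustrative values (not Concorde's actual loading data):

```python
# How far does moving fuel aft shift the centre of gravity?
# Lever rule: delta_x_cg = (m_fuel / m_total) * delta_x_fuel.
# All values below are assumptions for illustration, not from the article:
m_total_kg = 150_000   # assumed aircraft mass mid-cruise
m_fuel_kg = 10_000     # assumed fuel transferred to a rear trim tank
shift_fuel_m = 20.0    # assumed distance the fuel is moved aft

delta_cg = m_fuel_kg / m_total_kg * shift_fuel_m
print(f"CG moves aft by ~{delta_cg:.2f} m")  # ~1.33 m
# Moving a modest fraction of the fuel a long way gives a metre-scale CG
# shift, enough to track the rearward CP movement without deflecting
# control surfaces and incurring their drag penalty.
```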
This rearward shift causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The wings were designed to reduce the shift, but a significant movement remained. It could have been countered by the use of trim controls, but at such high speeds this would have increased drag unacceptably. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of powerplants which were efficient at twice the speed of sound, a slender fuselage with a high fineness ratio, and a complex wing shape giving a high lift-to-drag ratio. Only a modest payload could be carried, and the aircraft was trimmed without using deflected control surfaces, to avoid the drag they would incur. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading-edge slats to improve aerodynamic performance at all speeds, with the objective of extending the range to reach markets in new regions. It would have had higher-thrust engines with noise-reducing features and no environmentally objectionable afterburner. Preliminary design studies showed that an engine with a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593 could be produced. This would have given additional range and a greater payload, making new commercial routes possible. The variant was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travel would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, however, the overall equivalent dose would normally be less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation; if the radiation level became too high, Concorde would descend to a lower altitude. Cabin pressurisation Airliner cabins were usually maintained at a pressure equivalent to a moderate elevation, and Concorde's pressurisation was set to an altitude at the lower end of this range. Concorde's maximum cruising altitude was far above the levels at which subsonic airliners typically cruise. A sudden reduction in cabin pressure is hazardous to all passengers and crew: at such altitudes, a sudden cabin depressurisation would leave a "time of useful consciousness" of only 10–15 seconds, even for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective, and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was therefore equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid-descent procedure to bring the aircraft to a safe altitude. 
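The hazard motivating the rapid-descent procedure can be quantified with the International Standard Atmosphere. A minimal sketch using the textbook barometric formulas; the altitudes chosen are generic examples, since the article's own altitude figures were lost in extraction:

```python
import math

def isa_pressure_pa(h_m: float) -> float:
    """International Standard Atmosphere pressure, valid to ~20 km."""
    if h_m <= 11_000:                      # troposphere: linear lapse rate
        t = 288.15 - 0.0065 * h_m
        return 101_325 * (t / 288.15) ** 5.2559
    # lower stratosphere: isothermal at 216.65 K above the 11 km tropopause
    p11 = 101_325 * (216.65 / 288.15) ** 5.2559
    return p11 * math.exp(-9.80665 * (h_m - 11_000) / (287.05 * 216.65))

for h_ft in (8_000, 40_000, 60_000):      # illustrative altitudes (assumed)
    p = isa_pressure_pa(h_ft * 0.3048)
    print(f"{h_ft:>6} ft: {p / 1000:6.1f} kPa ({p / 101_325:.2f} atm)")
# Around 60,000 ft the ambient pressure is only ~7 kPa, about 7% of sea
# level; even pure oxygen delivered at that pressure cannot sustain
# consciousness, hence the emphasis on containment and an immediate descent.
```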
The FAA enforces minimum emergency descent rates for aircraft and, noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. Flight characteristics While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde's average cruise speed was more than twice that of conventional aircraft. With no other civil traffic operating at its cruising altitude, Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high-altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block of altitudes, allowing for a slow climb during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off. The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. Because of this high angle, during a landing approach Concorde was on the backside of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage Because of the way Concorde's delta wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation, this increased the stresses on the main undercarriage in a way that was initially unexpected during development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed, but due to their great height they also needed to contract in length telescopically before swinging, in order to clear each other when stowed. The tyres on the main bogies and the twin-wheel nose undercarriage were inflated to high pressures and rated for Concorde's high runway speeds; the nose undercarriage retracts forwards, and its wheel assembly carries a spray deflector to prevent standing water from being thrown up into the engine intakes. The high take-off speed required Concorde to have upgraded brakes. Like most airliners, Concorde has anti-skid braking to prevent the tyres from losing traction when the brakes are applied. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. 
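The size of the job these brakes do can be roughed out from kinetic energy alone. A minimal sketch; the mass and speed below are assumptions for illustration, not figures from the article (whose speed values were lost in extraction):

```python
# Order-of-magnitude energy a brake system must absorb in a rejected take-off.
# Assumed, illustrative values (not from the article):
mass_kg = 185_000   # assumed maximum take-off mass, ~185 t
v_ms = 90.0         # assumed rejected-take-off speed, roughly 175 kt

kinetic_energy_mj = 0.5 * mass_kg * v_ms**2 / 1e6
print(f"~{kinetic_energy_mj:.0f} MJ to dissipate as heat in the brakes")
# Roughly 750 MJ dumped into the discs within seconds: this is why
# high-capacity carbon brakes with forced cooling were needed.
```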
The use of carbon over equivalent steel brakes provided a substantial weight saving. Each wheel has multiple discs which are cooled by electric fans. Wheel sensors include brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, the brakes were still extremely hot. Landing Concorde required a long minimum runway length; the shortest runway on which Concorde ever landed carrying commercial passengers was at Cardiff Airport. Concorde G-AXDN (101) made its final landing at Duxford Aerodrome on 20 August 1977; this was the last aircraft to land at Duxford before the runway was shortened later that year. Droop nose Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch between being streamlined, to reduce drag and achieve optimal aerodynamic efficiency during flight, and giving the pilot an unobstructed view during taxi, take-off, and landing operations. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to it being lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage due to collision with ground vehicles, and then raised fully before engine shutdown to prevent pooling of internal condensation within the radome from seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available, and required alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used on the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, which needed to endure the high temperatures of supersonic flight, were developed by Triplex. Operational history Concorde began scheduled flights with British Airways (BA) and Air France (AF) on 21 January 1976. AF flew its last commercial flight on 30 May 2003, with BA retiring its Concorde fleet on 24 October 2003. Operators Air France British Airways Braniff International Airways operated Concordes at subsonic speed between Dulles International Airport and Dallas Fort Worth International Airport from January 1979 until May 1980, using its own flight and cabin crews, under its own insurance and operator's licence. Stickers containing a US registration were placed over the French and British registrations of the aircraft during each rotation, and a placard was temporarily placed behind the cockpit to identify the operator and the operating licence in command. 
Singapore Airlines had its livery placed on the left side of Concorde G-BOAD, and held a joint marketing agreement which saw Singapore insignias on the cabin fittings, as well as the airline's "Singapore Girl" stewardesses jointly sharing cabin duty with British Airways flight attendants. All flight crew, operations, and insurance remained solely under British Airways, however, and at no point did Singapore Airlines operate Concorde services under its own operator's certification, nor did it wet-lease an aircraft. This arrangement initially lasted for only three flights, conducted between 9 and 13 December 1977; it later resumed on 24 January 1979, and operated until 1 November 1980. The Singapore livery was used on G-BOAD from 1977 to 1980. Accidents and incidents Air France Flight 4590 On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets. According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up and then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse. Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths, but there had been two prior non-fatal accidents and a rate of tyre damage 30 times higher than that of subsonic airliners between 1995 and 2000. Safety improvements made after the crash included more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. In a flight of 3 hours 20 minutes over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and then returned to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations. The first flight with passengers after the 2000 grounding landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers. Other accidents and incidents On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, Australia, suffered a structural failure at supersonic speed. 
As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud was a minor engine surge. No further difficulty was encountered until the descent, at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period before the accident due to moisture seepage past the rivets in the rudder. Production staff had not followed proper procedures during an earlier modification of the rudder, but the procedures themselves were difficult to adhere to. The aircraft was repaired and returned to service. On 21 March 1992, G-BOAB, while flying British Airways Flight 001 from London to New York, also suffered a structural failure at supersonic speed. While cruising at Mach 2, the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No. 2 engine. The crew shut down the No. 2 engine and made a successful landing in New York, noting that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had separated from the structure of the rudder, which led to most of the upper rudder detaching in flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure, leading to the rudder breaking up in flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident the severity of the effect of these repair materials on the structure and skin of the rudder had not been appreciated. The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during take-offs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54, in which a tyre blowout pierced the plane's fuel tank and damaged a left engine and electrical cables, with the loss of two of the craft's hydraulic systems. Aircraft on display Twenty Concorde aircraft were built: two prototypes, two pre-production aircraft, two development aircraft and 14 production aircraft for commercial service. With the exception of two of the production aircraft, all are preserved, mostly in museums. One aircraft was scrapped in 1994, and another was destroyed in the Air France Flight 4590 crash in 2000. Comparable aircraft Tu-144 Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. Soviet espionage efforts allegedly stole Concorde blueprints to assist in the design of the Tu-144. 
As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech of Sud Aviation attributed this to two things: a very heavy powerplant with an intake twice as long as that on Concorde, and turbofan engines whose bypass ratio, although low, was still too high for efficient supersonic cruise, requiring afterburning. The aircraft had poor control at low speeds because of a simpler wing design, and it required braking parachutes to land. The Tu-144 suffered two crashes: one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which had carried an average of 58 passengers each. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 programme was cancelled by the Soviet government on 1 July 1983. SST and others The main competing designs for the US government-funded supersonic transport (SST) were the swing-wing Boeing 2707 and the compound-delta-wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and the Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971, having spent more than $1 billion without any aircraft being built. Impact Environmental Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director stated, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them." Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation of the ozone layer at the stratospheric altitudes at which it cruised. 
It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport. Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990. Public perception Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts. As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage. The aircraft was usually referred to by the British as simply "Concorde". In France it was known as "le Concorde" due to "le", the definite article, used in French grammar to introduce the name of a ship or aircraft, and the capital being used to distinguish a proper name from a common noun of the same spelling. In French, the common noun concorde means "agreement, harmony, or peace". Concorde's pilots and British Airways in official publications often refer to Concorde both in the singular and plural as "she" or "her". In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type car, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire. Special missions The heads of France and the United Kingdom flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as French flagship aircraft on foreign visits. Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair took Concorde in some charter flights such as the Queen's trips to Barbados on her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989. 
Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire Mobutu Sese Seko on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again for the total solar eclipse on 11 August 1999. Records The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by British Airways' G-BOAD in 2 hours, 52 minutes, 59 seconds from take-off to touchdown, aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops. Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain. The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Due to the restrictions on supersonic overflights within the US, the flight was granted permission by the Canadian authorities to fly the majority of the journey supersonically over sparsely populated Canadian territory. Specifications Notable appearances in media See also Barbara Harmer, the first qualified female Concorde pilot Museo del Concorde, a former museum in Mexico dedicated to the airliner Notes References Citations Bibliography External links Legacy British Airways Concorde page BAC Concorde at BAE Systems site Design Museum (UK) Concorde page Heritage Concorde preservation group site Articles Videos "Video: Roll-out." British Movietone/Associated Press. 14 December 1967, posted online on 21 July 2015. "This plane could cross the Atlantic in 3.5 hours. Why did it fail?" Vox Media. 19 July 2016. Air France–KLM British Airways British Aircraft Corporation aircraft Tailless delta-wing aircraft France–United Kingdom relations 1960s international airliners Quadjets Supersonic transports History of science and technology in the United Kingdom Aircraft first flown in 1969 Aircraft with retractable tricycle landing gear
Concorde
Physics
11,858
69,576,755
https://en.wikipedia.org/wiki/Zone%20theorem
In geometry, the zone theorem is a result that establishes the complexity of the zone of a line in an arrangement of lines. Definition A line arrangement, denoted $\mathcal{A}(L)$, is a subdivision of the plane, induced by a set $L$ of $n$ lines, into cells (2-dimensional faces), edges (1-dimensional faces) and vertices (0-dimensional faces). Given a set of lines $L$, the line arrangement $\mathcal{A}(L)$, and a line $\ell$ (not belonging to $L$), the zone of $\ell$ is the set of faces of $\mathcal{A}(L)$ intersected by $\ell$. The complexity of a zone is the total number of edges in its boundary, expressed as a function of $n$. The zone theorem states that said complexity is $O(n)$. History This result was published for the first time in 1985; Chazelle et al. gave an upper bound linear in $n$ for the complexity of the zone of a line in an arrangement. In 1991, this bound was improved to a smaller constant multiple of $n$, and it was also shown that this is the best possible upper bound up to a small additive factor. Then, in 2011, Rom Pinchasi proved a tight bound on the complexity of the zone of a line in an arrangement. Some paradigms used in the different proofs of the theorem are induction, the sweep technique, tree construction, and Davenport–Schinzel sequences. Generalizations Although the most popular version is for arrangements of lines in the plane, there exist generalizations of the zone theorem. For instance, in dimension $d$, considering arrangements of $n$ hyperplanes, the complexity of the zone of a hyperplane $h$ is the number of facets ($(d-1)$-dimensional faces) bounding the set of cells ($d$-dimensional faces) intersected by $h$. Analogously, the $d$-dimensional zone theorem states that the complexity of the zone of a hyperplane is $O(n^{d-1})$. There are considerably fewer proofs of the theorem for dimension $d \ge 3$. For the 3-dimensional case there are proofs based on sweep techniques, and for higher dimensions Euler's relation is used. Another generalization is considering arrangements of pseudolines (and pseudohyperplanes in dimension $d$) instead of lines (and hyperplanes). Some proofs of the theorem work well in this case, since they do not use the straightness of the lines substantially in their arguments. Motivation The primary motivation for studying the zone complexity in arrangements arises from looking for efficient algorithms to construct arrangements. A classical algorithm is incremental construction, which can be roughly described as adding the lines one after another and storing all faces generated by each in an appropriate data structure (the usual structure for arrangements is the doubly connected edge list (DCEL)). Here, the consequence of the zone theorem is that the entire construction of any arrangement of $n$ lines can be done in time $O(n^2)$, since the insertion of each line takes time $O(n)$. Notes References Euclidean plane geometry Theorems in plane geometry
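The linear insertion cost rests on a simple counting fact: a new line meets each of the $n$ existing lines at most once, so it is divided into at most $n+1$ pieces, each lying inside a single cell of its zone. A minimal Python sketch of that count (the full zone complexity also charges the edges of those cells, which is what the theorem bounds; the random lines here are purely illustrative):

```python
import random

def crossings_on_new_line(existing, new):
    """Return the x-coordinates where `new` crosses the existing lines.

    Lines are (slope, intercept) pairs; parallel lines (equal slopes)
    never intersect, and concurrent crossings are deduplicated.
    """
    m2, b2 = new
    xs = set()
    for m1, b1 in existing:
        if m1 != m2:  # two non-parallel lines meet exactly once
            xs.add((b2 - b1) / (m1 - m2))
    return sorted(xs)

random.seed(1)
lines = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
new_line = (random.uniform(-5, 5), random.uniform(-5, 5))

crossings = crossings_on_new_line(lines, new_line)
# The new line is cut into len(crossings) + 1 pieces, each inside one
# cell of the arrangement, so it visits at most n + 1 cells.
print(f"{len(lines)} lines, {len(crossings)} crossings, "
      f"<= {len(crossings) + 1} cells visited")
```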
Zone theorem
Mathematics
573
67,371,587
https://en.wikipedia.org/wiki/Kirkhill%20Astronomical%20Pillar
The Kirkhill Astronomical Pillar was constructed in 1776 by David Stewart Erskine, 11th Earl of Buchan and erected in the grounds of his estate at Kirkhill House, near Broxburn, Scotland. The pillar fell into disrepair and eventually collapsed in the 1970s but fortunately the stones were preserved and the pillar was reconstructed (1988) in Almondell Country Park on land once owned by the Erskine family. The pillar records the details of an adjacent scale model of the Solar System constructed by Erskine following the measurements of the size of the Solar System deduced from the observations of the Transits of Venus in 1761 and 1769. The model, centred on a Sun of stone six feet in diameter with planets at distances and sizes to scale, has long since disappeared; only the pillar remains. Erskine and science As a young child Erskine was taught at home by his parents, both of whom had studied (and met each other) in the classes of the famous mathematician Colin Maclaurin at Edinburgh University. They also employed a private tutor, James Buchanan, a graduate of Glasgow university, well versed in mathematics and languages. Under the guidance of this trio he developed a life interest in mathematics and astronomy. At the age of 13, Erskine entered St. Andrews University (1755–1759) and then continued to Edinburgh University (1760–1762) and finally Glasgow University (1762–63). Although Erskine's later intellectual activities were dominated by his investigation of Scottish antiquities, he remained interested in science and mathematics. He was honoured by election to the Royal Society of London in 1765. At that time he was living in London and at meetings of the society he would have heard much of the following topical astronomical problem. How far is the Sun? By the beginning of the eighteenth century the Copernican model of a heliocentric Solar System was well established and astronomers such as Tycho Brahe and Johannes Kepler were able to describe the motions of the planets with ever greater precision. However, no one knew the absolute size in miles (or any other units) of the Solar System although the solar distances of the planets could all be expressed as definite ratios of the Earth-Sun distance by using Kepler's laws. This fundamental distance is termed the Astronomical Unit (AU). The breakthrough came in 1639 when Jeremiah Horrocks made the first scientific observation of a transit of Venus and used his results to estimate an approximation for the AU. A second method, proposed in 1663 by the Scottish mathematician James Gregory, was promoted by Edmond Halley in a paper published in 1691 (revised 1716). He demonstrated how the AU could be measured very accurately by comparing the duration of the Venus transit across the face of the Sun as measured by two observers spaced at latitudes a few thousand kilometres apart. The next opportunities of observing such a transit were in 1761 and 1769 but Halley had died in 1742 and it was left to others to organise observations in the first ever major international scientific collaboration. The event of 1761 produced sparse results because travel overseas was greatly hindered by the Seven Years' War but in 1769 many observers were again despatched all over the world, amongst them being Captain James Cook on behalf of the Royal Society of London. 
Various pairs of observation results were input into Halley's calculations, giving many slightly different values, and a mean value of the AU was published shortly afterwards in the Philosophical Transactions of the Royal Society. The result was 93,726,900 miles, within one per cent of the presently accepted value of 92,955,807 miles. In Scotland, both transits were observed by Erskine's friend and neighbour, Reverend Alexander Bryce, minister of the church at Kirknewton, only 3 miles from Kirkhill. Bryce was a competent mathematician and he calculated the AU and the other distance parameters of the Solar System: it is these values that Erskine used to create his scale model of the Solar System. The epitome In his 'Account of the Parish of Uphall', Erskine writes: The scale appears unusual but it followed simply from Bryce's calculation of the diameter of the Sun as 884,396 miles and Erskine's arbitrary choice of a representation of the Sun by a freestone spheroid 6 feet, or 72 inches, in diameter. Dividing 884,396 by 72 gives 12,283.28 miles to one inch, or 778,268,621:1. Of the six planets known in the eighteenth century, Jupiter and Saturn were modelled in stone, the latter having an iron band, and the smaller planets were made of bronze: all were mounted on plinths or pillars in the grounds of the Kirkhill estate at the correct scaled distance from the Sun. Primrose, writing in 1898, says that only a few of the plinths remained in his day. The table giving the dimensions of his representation is carved into the east face of the stone pillar, or belfry; it is barely legible now, but the details are preserved in the Uphall account. Planet diameters and distances on the pillar are reproduced here, along with the values obtained by scaling inches up to miles, by a factor of 12,283.28. Modern values are shown for comparison. Details for the moons of Jupiter and Saturn have been omitted. Calculation of the values in the table starts from the new value of the AU calculated by Bryce. Kepler's laws then give the solar distance (in miles) for every planet and therefore, given the actual dimensions of the orbits, it is straightforward to calculate the distance of any planet from Earth at the time of any observation. Then, using the observed angular sizes of the Sun and the planets, he could deduce their diameters in miles. To fit the data in the table, Bryce must have calculated the value of the AU to be 95,072,587 miles. This value is greater than the modern (average) value of 93,000,000 miles, which largely accounts for the discrepancies in Erskine's data for distances and diameters. The third, fourth and fifth columns of the pillar are reproduced in a second table below. It shows that the eccentricities of the planets and their inclinations to the ecliptic were quite well known at the time. (In the table, Erskine's eccentricity value 80)387( is simply the fraction 80/387, and this has been replaced by the decimal 0.207 etc.) Eccentricity and inclination are the essential parameters for working out the motions of the planets. No values are given for the orbit inclinations to the ecliptic for Mars and Jupiter, the space on the table having been utilised for a comment on the moons of Jupiter. The last pair of columns refers to what Erskine terms the inclination: the angle between the planet's rotation axis and the plane of its orbit. 
Nowadays the term axial tilt is used by astronomers: it defines the angle between the rotation axis and the normal to the plane of the orbit, and it is equal to 90 degrees minus Erskine's inclination. The values for Mercury and Venus are omitted on the pillar. The final column on the pillar is a prediction of where the planets will be on May 20, 2255. The heliocentric places within the zodiac constellations define an angle now termed the heliocentric ecliptic longitude. Both are measured from the point in the sky where Aries begins. Each constellation covers 30 degrees, whereas the longitude covers the whole 360 degrees spanned by all 12 constellations. The order of zodiac constellations is Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, and Pisces. Therefore 9°40′ in Sagittarius for Mercury becomes a (decimal) longitude of 249.667°, etc. The significance of the year 2255 specified in the prediction is that it is a year in which a transit of Venus occurs; the eighth after that of 1769. During such a transit the Earth, Venus and the Sun must be closely aligned; in other words, the heliocentric places (longitudes) of the planets must be very close, as shown by the predictions for the actual transit on 9 June 2255. Therefore, since Erskine gives heliocentric places for Venus and Earth differing by about 35°, he was clearly not predicting a transit for 20 May. There is no astronomical phenomenon associated with that day, but it must have had some significance for Erskine, as yet unexplained. Other inscriptions on the pillar There are inscriptions on the four sides of the pillar but they are now difficult to read. Fortunately some are recorded in Erskine's history of Uphall and others in the account of the same parish by James Primrose. Most are in Latin, often abbreviated, but translations have been given by James Primrose in his chapter on Kirkhill. East Face This face has the table described in the previous section. Above the table is the quotation given at the beginning of the previous section, where Erskine (Buchan) describes his construction and its scale. West Face An inscription in Latin: Jacobo Buchanano, Matheseos P. Glasg. Adolescentiae meae Custod. incorruptissimo has Amoenitates Academicas Manibus propriis dedicavi, inscripsi, sacraque esse volui. Anno ab ejus excessu XV. et a Christo natu MDCCLXXVII. Ille ego qui quondam patriae perculsus amore, Civibus oppressis, libertati succurrere ausim, Nunc Arva paterna colo, fugioque limina regum. Primrose gives the translation: To James Buchanan, Professor of Mathematics at Glasgow, the most incorruptible guardian of my youth, have I dedicated, inscribed with my own hands, these Academic Amenities, and I wish them to be sacred. On the 15th year of his death and from the birth of Christ 1771, I who formerly, animated by love of country, dared to succour liberty and oppressed citizens, now cultivate my paternal fields and shun the threshold of Kings. James Buchanan was the tutor and mentor of Erskine's early years. He died in 1761. South Face A quotation from Vergil's Georgics, which may be translated as "Pay homage to the heavenly sent land" or "The worthy glory of the Divine Country is abiding". Underneath the inscription is a large bow and arrow, the significance of which is unknown, the sign for Scorpius, and an unidentified sign. North Face A long inscription gives abbreviated details of the location of the pillar and other points. Erskine gives a fuller version in his account of Uphall Parish. 
"The latitude of Kirkhill is 55°56'17" north, the west longitude in time from Greenwich Observatory is 13′ 59′′10′′′. The variation of the compass 1778 in June was 22°, the dip of the north end of the needle at the same time was 71°33'. The elevation above high water mark at Lieth (sic) when there is 12 feet of water in the harbour 273 feet; it is lower than the top of Arthurs Seat, 546 feet, lower than the Observatory on Calton Hill 83, than the top of the Castle Rock 290. West longitude in time from Edinburgh Observatory, 1°8"; east longitude in time from Glasgow Observatory, 3′11′′50′′′ - distance from Kirknewton Manse in Midlothian, 20,108 feet; north from Kirknewton Manse, 17,005 feet or 2′47′′ (arc); west from Kirknewton Manse, 10,680 feet or 12′′30′′′ in time." The mention of Kirknewton Manse links this inscription to its resident, Alexander Bryce, who provided the details of the epitome table. The latitude is in a conventional notation but the longitudes are defined in terms of time: 15 degrees of longitude corresponding to one hour. The Greenwich time separation from Kirkhill given as 13′ 59′′ 10′′′ (minutes, seconds, sixtieths) corresponds to longitude 3.496°W: the modern value is 3.46°W. Similarly time displacements of the observatories at Edinburgh and Glasgow should be read as 1′8′′ (not 1°8") and 3′11′′50′′′ respectively, corresponding to 17 and 48 arc minutes of longitude, or 11 and 31 miles. The distances from Kirknewton Manse to the pillar are direct, north and west: the latitude difference is 2′47″ (arc) and the longitude difference in time is 12′′30′′′ corresponding to 3.12 arc minutes of longitude. The height differences between the pillar and locations in Edinburgh are an interesting by-product of Bryce's survey of a canal from the city, past Kirkhill and on to Falkirk. Since there were to be no locks between the city and Broxburn the height of the pillar was easily related to that of the canal terminus and hence other known Edinburgh locations. Other inscriptions There are a number of other inscriptions which were close to the pillar. The globe representing the sun was engraved, in large Hebrew letters, with the question "What is man?" A plinth showing the Moon orbiting the Earth was inscribed "Newtono Magno". A small building near the pillar was inscribed "Keplero Felici". The approach to Kirkhill was guarded by pillars inscribed "Libertate quietate". On a triangular equilateral stone in Erskine's garden, was the inscription, "Great are thy works, Jehovah, infinite thy power!" The model re-imagined In the years leading up to the 2012 transit a group of Scottish artists collaborated on an artistic realisation of the Solar System model of Erskine. The Kirkhill Pillar Project was commissioned under the auspices of Artlink Edinburgh. The Sun is represented by a light box on the top of Broxburn academy, within a few hundred metres of the Erskine's own house. The artefacts representing the nine planets are distributed around the county of West Lothian at distances given by Erskine's scale. Mars and Jupiter are represented by small spheres mounted on plinths. Mercury is represented by a cast iron replica of the cratered surface of the predominantly iron planet. Venus is represented by a schematic version of its transit over the face of the Sun. Earth, inspired by the blue and white image seen on early space missions, is represented by two planters containing blue and white flowers. Mars is a distinctive red sculpture in community woodland. 
A cast acrylic clear block houses a painted model of the planet Jupiter. Saturn is represented by a technical image used by James Clerk Maxwell in his explanation of the structure and stability of the rings. Uranus is represented by a band suspended from two trees: it houses seven opaque apertures which allow the light to shine through. Neptune is captured as a blue orb in a lantern above the doors of Kingscavil church. Pluto is carved into black polished granite placed in Beecraigs Country Park. Images, further details and a map of locations may be found on the website of the Kirkhill project. See also Solar System models References Notes Citations Sources External links Pillar, Kirkhill and Erskine (Buchan) Photographs at https://holeousia.com/ Solar System Solar System models
Kirkhill Astronomical Pillar
Astronomy
3,191
10,451,618
https://en.wikipedia.org/wiki/Integrated%20catchment%20management
Integrated catchment management (ICM) is a subset of environmental planning which approaches sustainable resource management from a catchment perspective, in contrast to a piecemeal approach that artificially separates land management from water management. Details Integrated catchment management recognizes the existence of ecosystems and their role in supporting flora and fauna, providing services to human societies, and regulating the human environment. Integrated catchment management seeks to take into account complex relationships within those ecosystems: between flora and fauna, between geology and soils, between soils and the biosphere, and between the biosphere and the atmosphere. Integrated catchment management recognizes the cyclic nature of processes within an ecosystem, and values scientific and technical information for understanding and analysing the natural world. See also Catchment Management Authority (New South Wales) Catchment Management Authority (Victoria) Motueka River List of drainage basins by area References External links Landcare Research - Integrated Catchment Management ABC catchment fact sheet Natural resource management Urban planning Hydrology Rivers
Integrated catchment management
Chemistry,Engineering,Environmental_science
188
3,352,292
https://en.wikipedia.org/wiki/Wavelet%20transform
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. Definition A function $\psi \in L^2(\mathbb{R})$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is, a complete orthonormal system for the Hilbert space $L^2(\mathbb{R})$ of square-integrable functions on the real line. The Hilbert basis is constructed as the family of functions $\psi_{j,k}(x) = 2^{j/2}\,\psi(2^j x - k)$ by means of dyadic translations and dilations of $\psi$, for integers $j, k \in \mathbb{Z}$. If, under the standard inner product on $L^2(\mathbb{R})$, this family is orthonormal, then it is an orthonormal system: $\langle \psi_{j,k}, \psi_{l,m} \rangle = \delta_{jl}\,\delta_{km}$, where $\delta_{jl}$ is the Kronecker delta. Completeness is satisfied if every function $f \in L^2(\mathbb{R})$ may be expanded in the basis as $f(x) = \sum_{j,k=-\infty}^{\infty} c_{j,k}\,\psi_{j,k}(x)$, with convergence of the series understood to be convergence in norm. Such a representation of $f$ is known as a wavelet series. This implies that an orthonormal wavelet is self-dual. The integral wavelet transform is the integral transform defined as $[W_\psi f](a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} \overline{\psi\!\left(\frac{x-b}{a}\right)} f(x)\,dx$. The wavelet coefficients $c_{j,k}$ are then given by $c_{j,k} = [W_\psi f](2^{-j}, k\,2^{-j})$. Here, $a = 2^{-j}$ is called the binary dilation or dyadic dilation, and $b = k\,2^{-j}$ is the binary or dyadic position. Principle The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape, imposing a restriction on choosing suitable basis functions. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. This is based on the uncertainty principle of signal processing, $\Delta t\,\Delta\omega \ge \tfrac{1}{2}$, where $t$ represents time and $\omega$ angular frequency ($\omega = 2\pi f$, where $f$ is ordinary frequency). The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window is chosen, the larger is the value of $\Delta t$. When $\Delta t$ is large: Bad time resolution Good frequency resolution Low frequency, large scaling factor When $\Delta t$ is small: Good time resolution Bad frequency resolution High frequency, small scaling factor In other words, the basis function $\psi$ can be regarded as an impulse response of a system with which the function $f$ has been filtered. The transformed signal provides information about the time and the frequency. Therefore, the wavelet transform contains information similar to the short-time Fourier transform, but with the additional special properties of the wavelets, which show up in the time resolution at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note, however, that the frequency resolution decreases for increasing frequencies while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the figure. This shows that the wavelet transform is good in time resolution of high frequencies, while for slowly varying functions the frequency resolution is remarkable. Another example: the analysis of three superposed sinusoidal signals with the STFT and the wavelet transform. Wavelet compression Wavelet compression is a form of data compression well suited for image compression (sometimes also video compression and audio compression). Notable implementations are JPEG 2000, DjVu and ECW for still images, JPEG XS, CineForm, and the BBC's Dirac. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy. 
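The dyadic coefficients defined above are what a discrete wavelet transform computes in practice, and the concentration of signal energy in a few of them is what the compression scheme discussed next exploits. A minimal sketch using NumPy and the PyWavelets package (assuming it is installed; the test signal, the 'db4' wavelet, and the keep-5% threshold are arbitrary illustrative choices):

```python
import numpy as np
import pywt

# A test signal: a smooth oscillation plus a sharp transient.
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t)
signal[512:520] += 2.0  # the transient

# Multi-level discrete wavelet transform (Daubechies-4 here).
coeffs = pywt.wavedec(signal, 'db4', level=5)

# Keep only the largest coefficients: most of the signal's energy
# is concentrated in a few of them.
flat = np.concatenate(coeffs)
cutoff = np.quantile(np.abs(flat), 0.95)  # drop the smallest 95%
thresholded = [pywt.threshold(c, cutoff, mode='hard') for c in coeffs]

reconstructed = pywt.waverec(thresholded, 'db4')[: len(signal)]
err = np.max(np.abs(reconstructed - signal))
print(f"kept ~5% of coefficients, max reconstruction error = {err:.3f}")
```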
Using a wavelet transform, the wavelet compression methods are adequate for representing transients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used. The discrete wavelet transform has been successfully applied to the compression of electrocardiograph (ECG) signals. In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is utilized, employing linear prediction. Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well, but smooth, periodic signals are better compressed using other methods, particularly traditional harmonic analysis in the frequency domain with Fourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, the Vorbis audio codec primarily uses the modified discrete cosine transform to compress audio (which is generally smooth and periodic), but allows the addition of a hybrid wavelet filter bank for improved reproduction of transients. See Diary Of An x264 Developer: The problems with wavelets (2010) for discussion of practical issues of current methods using wavelets for video compression. Method First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run-length encoded. A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints". Evaluation Requirement for image compression For most natural images, the spectral density is higher at lower frequencies. As a result, information of the low-frequency signal (the reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression: Being able to transform as much of the original image as possible into the reference signal. Highest-fidelity reconstruction based on the reference signal. Should not lead to artifacts in the image reconstructed from the reference signal alone. Requirement for shift variance and ringing behavior A wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below: The transformation system contains two analysis filters (a low-pass filter $h_0$ and a high-pass filter $h_1$), a decimation process, an interpolation process, and two synthesis filters ($g_0$ and $g_1$). The compression and reconstruction system generally involves the low-frequency components, that is, the analysis filters for image compression and the synthesis filters for reconstruction. 
To evaluate such a system, we can input an impulse $\delta(n)$ and observe its reconstruction $h(n)$; the optimal wavelets are those which bring minimum shift variance and sidelobe to $h(n)$. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters: by observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e., it is less shift-variant). Another important issue for image compression and reconstruction is the system's oscillatory behavior, which might lead to severe undesired artifacts in the reconstructed image. To avoid this, the wavelet filters should have a large peak-to-sidelobe ratio. So far we have discussed the one-dimensional transformation of the image compression system. This issue can be extended to two dimensions, and a more general term, shiftable multiscale transforms, has been proposed. Derivation of impulse response As mentioned earlier, the impulse response can be used to evaluate the image compression/reconstruction system. For the input sequence $x(n) = \delta(n)$, the reference signal $r_1(n)$ after one level of decomposition is $x(n)$ filtered by the low-pass analysis filter $h_0$ and passed through decimation by a factor of two. Similarly, the next reference signal $r_2(n)$ is obtained by $r_1(n)$ filtered by $h_0$ and passed through decimation by a factor of two. After $L$ levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every $2^L$ samples. On the other hand, to reconstruct the signal $x(n)$, we can consider a reference signal $r_L(n) = \delta(n)$. If the detail signals are equal to zero for stages $1$ through $L$, then the reference signal at the previous stage (stage $L-1$) is $r_{L-1}(n)$, which is obtained by interpolating $r_L(n)$ and convolving with the synthesis low-pass filter $g_0$. Similarly, the procedure is iterated to obtain the reference signal at stage $0$. After $L$ iterations, the synthesis impulse response is calculated, which relates the reference signal $r_L(n)$ and the reconstructed signal. To obtain the overall $L$-level analysis/synthesis system, the analysis and synthesis responses are combined to give the overall impulse response. Finally, the peak-to-first-sidelobe ratio and the average second sidelobe of the overall impulse response can be used to evaluate the wavelet image compression performance. Comparison with Fourier transform and time-frequency analysis Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function. The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses. Other practical applications The wavelet transform can provide us with the frequency of the signals and the time associated with those frequencies, making it very convenient for application in numerous fields. For instance, signal processing of accelerations for gait analysis, for fault detection, for the analysis of seasonal displacements of landslides, for the design of low-power pacemakers and also in ultra-wideband (UWB) wireless communications. Time-causal wavelets For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. 
Time-causal wavelet representations have been developed by Szu et al. and by Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation. Synchro-squeezed transform The synchro-squeezed transform can significantly enhance the temporal and frequency resolution of the time-frequency representation obtained using the conventional wavelet transform. See also Binomial QMF (also known as Daubechies wavelet) Biorthogonal nearly coiflet basis, which shows that wavelet for image compression can also be nearly coiflet (nearly orthogonal) Chirplet transform Complex wavelet transform Constant-Q transform Continuous wavelet transform Daubechies wavelet Discrete wavelet transform DjVu format uses wavelet-based IW44 algorithm for image compression Dual wavelet ECW, a wavelet-based geospatial image format designed for speed and processing efficiency Gabor wavelet Haar wavelet JPEG 2000, a wavelet-based image compression standard Least-squares spectral analysis Morlet wavelet Multiresolution analysis MrSID, the image format developed from original wavelet compression research at Los Alamos National Laboratory (LANL) S transform Scaleograms, a type of spectrogram generated using wavelets instead of a short-time Fourier transform Set partitioning in hierarchical trees Short-time Fourier transform Stationary wavelet transform Time–frequency representation Wavelet References External links Concise Introduction to Wavelets by René Puschinger Wavelets Functional analysis Signal processing Image compression Data compression
Wavelet transform
Mathematics,Technology,Engineering
2,306
75,632,051
https://en.wikipedia.org/wiki/Neptunium%28III%29%20bromide
Neptunium(III) bromide is a bromide of neptunium, with the chemical formula NpBr3. Preparation Neptunium(III) bromide can be prepared by reacting neptunium dioxide with aluminium bromide. Properties Neptunium(III) bromide is a green solid. It can crystallize in two crystal systems: α-NpBr3 is hexagonal with lattice parameters a = 791.7 pm and c = 438.2 pm. It has the same structure as uranium trichloride. β-NpBr3 is orthorhombic with lattice parameters a = 411 pm, b = 1265 pm and c = 915 pm. It has the same structure as the bromides of the elements from plutonium to californium. Neptunium(III) bromide also forms a green hexahydrate, which is monoclinic. Reactions At 425 °C, neptunium(III) bromide can be further brominated by bromine to form neptunium(IV) bromide. References Further reading Neptunium(III) compounds Bromides Actinide halides
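For a hexagonal cell such as α-NpBr3, the unit-cell volume follows from the standard formula $V = \frac{\sqrt{3}}{2} a^2 c$. A small worked sketch with the lattice parameters quoted above (the formula is general; only the parameter values come from the text):

```python
import math

# Lattice parameters of alpha-NpBr3 quoted above, in picometres.
a = 791.7
c = 438.2

# Volume of a hexagonal unit cell: V = (sqrt(3)/2) * a^2 * c
volume_pm3 = (math.sqrt(3) / 2) * a**2 * c
volume_nm3 = volume_pm3 * 1e-9  # 1 nm^3 = 1e9 pm^3

print(f"V = {volume_pm3:.3e} pm^3 = {volume_nm3:.4f} nm^3")
```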
Neptunium(III) bromide
Chemistry
250
2,953,344
https://en.wikipedia.org/wiki/Stagnation%20pressure
In fluid dynamics, stagnation pressure, also referred to as total pressure, is what the pressure would be if all the kinetic energy of the fluid were converted into pressure in a reversible manner; it is defined as the sum of the free-stream static pressure and the free-stream dynamic pressure. The Bernoulli equation applicable to incompressible flow shows that the stagnation pressure is equal to the dynamic pressure and static pressure combined. In compressible flows, stagnation pressure is equal to total pressure, provided that the fluid entering the stagnation point is brought to rest isentropically. Stagnation pressure is sometimes referred to as pitot pressure because the two pressures are equal. Magnitude The magnitude of stagnation pressure can be derived from the Bernoulli equation for incompressible flow with no height changes. For any two points 1 and 2: $P_1 + \tfrac{1}{2}\rho v_1^2 = P_2 + \tfrac{1}{2}\rho v_2^2$. The two points of interest are 1) in the freestream flow at relative speed $v$, where the pressure is called the "static" pressure (for example, well away from an airplane moving at speed $v$); and 2) at a "stagnation" point where the fluid is at rest with respect to the measuring apparatus (for example, at the end of a pitot tube in an airplane). Then $P_{\text{stagnation}} = P_{\text{static}} + \tfrac{1}{2}\rho v^2$, where: $P_{\text{stagnation}}$ is the stagnation pressure, $\rho$ is the fluid density, $v$ is the speed of the fluid, and $P_{\text{static}}$ is the static pressure. So the stagnation pressure is increased over the static pressure by the amount $\tfrac{1}{2}\rho v^2$, which is called the "dynamic" or "ram" pressure because it results from fluid motion. In our airplane example, the stagnation pressure would be atmospheric pressure plus the dynamic pressure. In compressible flow, however, the fluid density is higher at the stagnation point than at the static point. Therefore, $\tfrac{1}{2}\rho v^2$ can't be used for the dynamic pressure. For many purposes in compressible flow, the stagnation enthalpy or stagnation temperature plays a role similar to the stagnation pressure in incompressible flow. Compressible flow Stagnation pressure is the static pressure a gas retains when brought to rest isentropically from Mach number $M$: $\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\frac{\gamma}{\gamma - 1}}$, or, assuming an isentropic process, the stagnation pressure can be calculated from the ratio of the stagnation temperature to the static temperature: $\frac{p_0}{p} = \left(\frac{T_0}{T}\right)^{\frac{\gamma}{\gamma - 1}}$, where: $p_0$ is the stagnation pressure, $p$ is the static pressure, $T_0$ is the stagnation temperature, $T$ is the static temperature, and $\gamma$ is the ratio of specific heats. The above derivation holds only for the case when the gas is assumed to be calorically perfect (specific heats and the ratio of the specific heats are assumed to be constant with temperature). See also Hydraulic ram Stagnation temperature Notes References L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. Cengel, Y. A. and Boles, M. A., Thermodynamics: An Engineering Approach, McGraw-Hill. External links Pitot-Statics and the Standard Atmosphere F. L. Thompson (1937) The Measurement of Air Speed in Airplanes, NACA Technical Note #616, from SpaceAge Control. Fluid dynamics
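A short numerical sketch of the two relations above; the sea-level air values (static pressure, density, speed of sound, γ) are standard illustrative assumptions, not values from the text. The two results agree closely at low Mach number and diverge as compressibility becomes significant:

```python
# Stagnation pressure: incompressible (Bernoulli) vs. compressible (isentropic).
P_STATIC = 101_325.0   # Pa, sea-level static pressure (assumed)
RHO = 1.225            # kg/m^3, sea-level air density (assumed)
GAMMA = 1.4            # ratio of specific heats for air
A_SOUND = 340.3        # m/s, sea-level speed of sound (assumed)

def stagnation_incompressible(p, rho, v):
    """Bernoulli: p0 = p + 0.5 * rho * v^2."""
    return p + 0.5 * rho * v**2

def stagnation_isentropic(p, mach, gamma=GAMMA):
    """p0 = p * (1 + (gamma - 1)/2 * M^2) ** (gamma / (gamma - 1))."""
    return p * (1 + (gamma - 1) / 2 * mach**2) ** (gamma / (gamma - 1))

for v in (50.0, 170.0, 300.0):
    m = v / A_SOUND
    print(f"v = {v:5.0f} m/s  M = {m:4.2f}  "
          f"Bernoulli: {stagnation_incompressible(P_STATIC, RHO, v):9.0f} Pa  "
          f"isentropic: {stagnation_isentropic(P_STATIC, m):9.0f} Pa")
```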
Stagnation pressure
Chemistry,Engineering
635
2,346,537
https://en.wikipedia.org/wiki/Jackscrew
A jackscrew, or screw jack, is a type of jack that is operated by turning a leadscrew. It is commonly used to lift moderate and heavy weights, such as vehicles; to raise and lower the horizontal stabilizers of aircraft; and as adjustable supports for heavy loads, such as the foundations of houses. Description A screw jack consists of a heavy-duty vertical screw with a load table mounted on its top, which screws into a threaded hole in a stationary support frame with a wide base resting on the ground. A rotating collar on the head of the screw has holes into which the handle, a metal bar, fits. When the handle is turned clockwise, the screw moves further out of the base, lifting the load resting on the load table. In order to support large load forces, the screw is usually formed with Acme threads. Advantages An advantage of jackscrews over some other types of jack is that they are self-locking, which means that when the rotational force on the screw is removed, it will remain motionless where it was left and will not rotate backwards, regardless of how much load it is supporting. This makes them inherently safer than hydraulic jacks, for example, which will move backwards under load if the force on the hydraulic actuator is accidentally released. Mechanical advantage The ideal mechanical advantage of a screw jack, the ratio of the force the jack exerts on the load to the input force on the lever, ignoring friction, is $\mathrm{MA} = \frac{F_{\text{load}}}{F_{\text{in}}} = \frac{2\pi r}{l}$, where $F_{\text{load}}$ is the force the jack exerts on the load, $F_{\text{in}}$ is the rotational force exerted on the handle of the jack, $r$ is the length of the jack handle, from the screw axis to where the force is applied, and $l$ is the lead of the screw. The screw jack consists of two simple machines in series; the long operating handle serves as a lever whose output force turns the screw. So the mechanical advantage is increased by a longer handle as well as a finer screw thread. However, most screw jacks have large amounts of friction which increase the input force necessary, so the actual mechanical advantage is often only 30% to 50% of this figure. Limitations Screw jacks are limited in their lifting capacity. Increasing load increases friction within the screw threads. A fine-pitch thread, which would increase the mechanical advantage of the screw, also reduces the speed at which the jack can operate. Using a longer operating lever soon reaches the point where the lever will simply bend at its inner end. Screw jacks have now largely been replaced by hydraulic jacks. This was encouraged in 1858 when hydraulic jacks built by the Tangye company to Bramah's hydraulic press concept were applied to the successful launching of Brunel's SS Great Eastern, after two failed attempts by other means. The maximum mechanical advantage possible for a hydraulic jack is not limited by the limitations on screw jacks and can be far greater. After World War II, improvements to the grinding of hydraulic rams and the use of O-ring seals reduced the price of low-cost hydraulic jacks and they became widespread for use with domestic cars. Screw jacks still remain for minimal-cost applications, such as the little-used tyre-changing jacks supplied with cars, or where their self-locking property is important, such as for horizontal stabilizers on aircraft. Applications The large area of sliding contact between the screw threads means jackscrews have high friction and low efficiency as power transmission linkages, around 30%–50%. So they are not often used for continuous transmission of high power, but more often in intermittent positioning applications. 
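A short numerical sketch of the mechanical advantage formula above; the handle length, screw lead, applied force, and 40% efficiency are illustrative assumptions, not values for any particular jack:

```python
import math

def ideal_mechanical_advantage(handle_length_m, lead_m):
    """Ideal MA of a screw jack: one handle turn advances the screw by one lead."""
    return (2 * math.pi * handle_length_m) / lead_m

handle = 0.45      # m, length of the operating handle (assumed)
lead = 0.006       # m, screw lead: axial travel per revolution (assumed)
efficiency = 0.40  # within the 30%-50% range given in the text

ma_ideal = ideal_mechanical_advantage(handle, lead)
ma_actual = ma_ideal * efficiency

force_on_handle = 150.0  # N, applied at the end of the handle (assumed)
print(f"ideal MA  = {ma_ideal:.0f}")
print(f"actual MA = {ma_actual:.0f}")
print(f"{force_on_handle:.0f} N on the handle lifts about "
      f"{force_on_handle * ma_actual / 9.81:.0f} kg")
```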
In heavy-duty applications, such as screw jacks, a square thread or buttress thread is used, because it has the lowest friction and wear. Industrial and technical applications In technical applications, such as actuators, an Acme thread is used: although it has higher friction, it is easy to manufacture, wear can be compensated for, it is stronger than a comparably sized square thread, and it makes for smoother engagement. The ball screw is a more advanced type of leadscrew that uses a recirculating-ball nut to minimize friction and prolong the life of the screw threads. The thread profile of such screws is approximately semicircular (commonly a "gothic arch" profile) to properly mate with the bearing balls. The disadvantage of this type of screw is that it is not self-locking. Ball screws are prevalent in powered leadscrew actuators. Aviation Jackscrews are also used extensively in aircraft systems to raise and lower horizontal stabilizers. The failure of a jackscrew on a Yakovlev Yak-42 airliner due to design flaws resulted in the crash of Aeroflot Flight 8641 in 1982. The failure of a jackscrew on a McDonnell Douglas MD-80 due to deficient maintenance brought down Alaska Airlines Flight 261 in 2000. An MRAP armoured vehicle being transported aboard National Airlines Flight 102 in 2013, a Boeing 747-400BCF freighter, broke loose immediately after takeoff and smashed through the rear bulkhead. Both flight recorders were knocked offline, hydraulic lines were severed and, most critically, the horizontal stabilizer actuator's jackscrew was destroyed, rendering the aircraft uncontrollable. Machinist's jacks A machinist's jack is a miniature screw jack used to support protruding parts of a workpiece or to balance clamping forces on that workpiece during machining operations. Aside from their size, these frequently look no different from the screw jacks used to lift buildings off their foundations. Machinist's jacks can be as simple as a threaded spacer with a bolt in it to serve as a jackscrew. In electronic connectors The term jackscrew is also used for the captive screws that draw the two parts of some electrical connectors together and hold them mated. These are commonly encountered on D-subminiature connectors, where they serve primarily to prevent accidental disconnection. On larger connectors, the jackscrews also help align the connectors and overcome the large frictional forces involved in inserting or removing the connector. When unscrewed, they allow the connector halves to be taken apart. Jackscrews in electrical connectors may have ordinary screw heads or extended heads designed as thumbscrews. The idea of incorporating jackscrews into electrical connectors was not considered novel in the late 1950s and early 1960s. Some patents from that era show pairs of jackscrews on opposite sides of a multi-pin connector. Another shows a single central jackscrew. These patents mention the phrase "jack screw" incidentally, without asserting a claim to the idea. Jackscrews may have either male or female threads, and on some connectors the genders of the screws, as well as various alignment pins, may be mixed in order to prevent the wrong connector from being connected to the wrong socket. See also Acrow prop Ball screw Leadscrew Roller screw References Mechanisms (engineering) Screws Actuators Construction equipment
Jackscrew
Engineering
1,439
8,179,780
https://en.wikipedia.org/wiki/Clifford%20W.%20Holmes%20Award
The Clifford W. Holmes Award is presented annually near Big Bear City, California at the RTMC Astronomy Expo to an individual for a significant contribution to popularizing astronomy. Established in 1978 by Richard Poremba as the Astronomy for America Award, it was renamed for Clifford W. Holmes, the founder of the Riverside Telescope Makers Conference (RTMC) in 1980. Awardees Recipients of the award are: 1978: Paul Zurakowski 1979: Arthur Leonard 1980: Robert E. Cox 1981: Richard Berry 1982: Dennis di Cicco 1983: John Dobson 1984: Jim Jacobson 1985: Arthur Leonard 1986: Bob Schalck 1987: Clyde Tombaugh 1988: Kevin Medlock 1989: David H. Levy 1990: Dick Buchroeder 1991: Rick Shaffer 1992: Ashley McDermott 1993: John Sanford 1994: Donald C. Parker 1995: Don Machholz 1996: Gil Clark 1997: Randall Wilcox 1998: Randy Johnson 1999: William Seavey 2000: Tom Cave 2001: Scott W. Roberts 2002: Ed Krupp 2003: Steve Edberg 2004: Dean Ketelsen 2005: Mike Simmons 2006: Al Fink 2007: Dave Rodrigues 2008: Laura and Bob Eklund 2009: Don Nicholson 2010: David Crawford 2011: Robert Victor 2012: Robert D. Stephens 2013: Jim Benet 2014: Jane Houston-Jones 2015: Terri Lappin 2016: Randy and Pamela Shivak See also List of astronomy awards References External links RTMC Astronomy Expo Astronomy prizes
Clifford W. Holmes Award
Astronomy,Technology
305
67,472,830
https://en.wikipedia.org/wiki/Jungle%20chip
A jungle chip, or jungle IC, is an integrated circuit (IC or "chip") found in most analog televisions of the 1990s. It takes a composite video signal from the radio frequency receiver electronics and turns it into separate RGB outputs that can be sent to the cathode ray tube to produce a display. This task had previously required separate analog circuits. Advanced versions generally had a second set of inputs in RGB format that were used to overlay on-screen display imagery. These would be connected to a microcontroller that would handle operations like tuning, sleep mode and running the remote control. A separate input called "blanking" switched the jungle outputs between the two inputs on the fly. This was normally triggered at a fixed location on the screen, creating rectangular areas with the digital data overlaying the television signal. This was used for on-screen channel displays, closed captioning support, and similar duties. The internal RGB inputs have led to such televisions having a revival in the retrocomputing market. By running connectors from the RGB pins on the jungle chip to connectors added by the user, typically RCA jacks on the back of the television case, and then turning on the blanking switch permanently, the system is converted to an RGB monitor. Since early computers output signals with television timings, NTSC or PAL, using a jungle chip television avoids the need to provide separate timing signals. This contrasts with multisync monitors or similar designs that do not have any "built-in" timing and have separate inputs for these signals. Examples of jungle chips include the Motorola MC65585, Philips TDA6361 and Sony CXA1870. References Integrated circuits Analog video connectors
Jungle chip
Technology,Engineering
357
8,469,059
https://en.wikipedia.org/wiki/TWiT.tv
TWiT.tv is a podcast network that broadcasts technology-focused podcasts, founded by broadcaster and author Leo Laporte in 2005 and run by his wife and company CEO Lisa Laporte. The network began operation in April 2005 with the launch of This Week in Tech. Security Now was the second podcast on the network, debuting in August of that year. As of January 2024, the network hosts 14 podcasts; however, due to declining advertisement sales, some are being discontinued or are available only with a Club TWiT subscription, and the TWiT studio was closed in August 2024. Podcasts include This Week in Tech, Security Now, and MacBreak Weekly. TWiT founder and owner Leo Laporte, in an October 2009 speech, stated that it grossed revenues of $1.5 million per year, while costs were around $350,000. In November 2014, during an interview with American Public Media's Marketplace, Leo Laporte stated that TWiT makes $6 million in ad revenue a year from 5 million TWiT podcasts downloaded each month, mostly in the form of audio, and that 3,000 to 4,000 people watch its live-streamed shows. On March 18, 2015, prior to the filming of This Week in Google, Leo Laporte stated that TWiT expected to make $7 million in revenue in fiscal year 2015, and it made "almost" $10 million in revenue in 2016. TWiT gets its name from its first and flagship podcast, This Week in Tech. The logo design originated from the traditional logic gate symbol of an AND gate turned on its side. Voiceovers are provided by Jim Cutler. Programming TWiT's podcasts are centered around technology and technology news. They are hosted by journalists with knowledge in their coverage areas. Shows The TWiT Network is host to a number of shows; for some, video access is available only via a Club TWiT membership. Litigation In May 2017, Twitter announced that it would deliver original video content on its platform. Lawyers from TWiT believed this violated a spoken agreement between Leo Laporte and Twitter co-founder Evan Williams made in 2009, and infringed on TWiT's trademark. TWiT tried to informally resolve the trademark issue, and in January 2018 filed a trademark infringement lawsuit against Twitter. In March 2018, Twitter filed a motion to dismiss. On May 30, 2018, US Magistrate Judge Jacqueline Scott Corley granted Twitter's motion to dismiss the case. The judge found that TWIT's discussions with Twitter "do not support a plausible inference that Twitter agreed to never offer audio or video content under the Twitter brand." Awards This Week in Tech was the recipient of the 2005, 2008, and 2010 People's Choice Podcast Awards in the Technology category and Best Video Podcast in 2009 and 2011. Tech News Today was the recipient of the 2012 International Academy of Web Television award for Best News Web Series. It also won the People's Choice Podcast Awards in the Technology category in 2011 and 2013. Security Now was the recipient of the 2007 People's Choice Podcast Awards in the Technology category. This Week in Computer Hardware, Home Theater Geeks, NSFW, This Week in Tech, MacBreak Weekly, TWiT Live Specials, iPad Today, Tech News Today, The Tech Guy, This Week in Google, and Windows Weekly were named "Best of 2010 in Podcasts" by iTunes Rewind. In 2011, This Week in Tech was named "Best Technology Podcast", and TWiT Photo was named "Best New Technology Podcast" by iTunes Rewind. 
In 2017, Triangulation was awarded the first "Best Podcast: Technology" Webby Award for the episode Leo Laporte Talks with Edward Snowden's Lawyer, and Leo Laporte was chosen as an Honoree for "Podcasts: Best Host" by The Webby Awards. See also This Week in Tech References External links 2005 establishments in California Internet properties established in 2005 Television channels and stations established in 2005 Companies based in Sonoma County, California Petaluma, California Podcasting companies Internet television channels Computer television series Mass media in Sonoma County, California
TWiT.tv
Technology
856
23,569,604
https://en.wikipedia.org/wiki/Hotel%20television%20systems
Hotel television systems (sometimes also referred to as hotel TV) are the in-suite television content presented in hotel rooms, other hotel environments and in the hospitality industry for in-room entertainment, as well as in hospitals, assisted living, senior care and nursing homes. These services may be free for the guest or paid, depending on the service and the individual hotel's or hotel chain's policy. Generally these services are controlled using the remote control. Services Hotel television is generally available as free-to-guest services, which may include local channels and satellite or cable programming, or as interactive television, which provides services such as video on demand or other paid services including movies, music, adult content, and other services. In some cases hotel TV also means a bundle of interactive services that are made available on a guest's TV screen, such as a hotel welcome screen with hotel information, hotel services, an information portal with weather, news and local attractions, video games, internet applications, internet television, movie rental services, and ordering and shopping for the hotel's amenities. In other cases, some hotels may have information channels consisting of looping videos promoting the local area. Cable and satellite television systems Commonly, a hotel television system distributing a satellite television signal is known as a satellite master antenna TV (SMATV) system. In an L-band distribution system, the television signal is sent from the satellite dish to a panel in a distribution closet and on to a set-top box in each room, which decrypts the digital signal, via a coaxial network. In a headend type of system, the signal is encrypted by a QAM modulator at the headend to prevent piracy and then distributed via a COM3000 from Technicolor, or similar hotel television headend. In an IPTV system, all video, voice and data are transmitted over an internal hotel IP network. In cable or satellite TV systems, the signal may be distributed via a coaxial network or an IP network, either to a set-top box in each room through an L-band type system or directly to Pro:Idiom-encrypted television sets through a headend type hotel television system. Signal distribution Satellite television, cable television and over-the-air (OTA) signals, as well as locally generated programming such as hotel guest welcome screens and other hotel information and services, can be distributed via an L-band type system, a COM3000 HD/4K Pro:Idiom headend from Technicolor, or an IPTV type distribution system. In most hotels, a television signal provided by a satellite television or cable television provider or an OTA antenna is transmitted over a hotel coaxial cable network. Most hotels today are wired only with coaxial cables. Some newer hotels are pre-wired with UTP or CAT-5/6 cabling, which enables IP-based hotel television services. For hotels wired with coaxial cable, technology has emerged recently which enables some to take advantage of IP-based signal transmission over coax cables. See also Cable television headend L band References Satellite Broadcasting & Communications Association European Satellite Operators Association Hotel terminology Television technology
Hotel television systems
Technology
642
62,615,548
https://en.wikipedia.org/wiki/Martin%20Medal
The Martin Medal is an award given for outstanding contributions to the advancement of separation science. The award is presented by The Chromatographic Society, a UK-based organization promoting all aspects of chromatography and related separation techniques. The award is named after Professor Archer J. P. Martin, who contributed to the invention of partition chromatography and shared the 1952 Nobel Prize in Chemistry. Award winners Past winners of the Martin Medal are: Robert Kennedy (2019) Jean-Luc Veuthey (2018) Andreas Manz (2017) Ian Wilson & Peter Myers (2016) Pavel Jandera (2015) Nobuo Tanaka (2014) Günther Bonn & Frantisek Svec (2013) Edward S. Yeung (2012) Peter J. Schoenmakers (2011) Peter Carr (2010) Wolfgang F. Lindner (2009) Ron Majors & Johan Roeraade (2007) Jim Waters (2006) Vadim A. Davankov (2005) Terry Berger (2004) Jack Henion (2003) Paul R. Haddad & Werner Engewald (2002) John Michael Ramsey (2001) Klaus Mosbach & William S. Hancock (2000) Hans Poppe & Geoffrey Eglinton, FRS (1999) Albert Zlatkis (1998, awarded posthumously) Will Jennings & Joseph Jack Kirkland (1997) Milton L. Lee (1996) Milos Novotny & Shigeru Terabe (1995) Pat Sandra & Csaba Horvath (1994) Hans Engelhardt, Fred E. Regnier, & Klaus K. Unger (1993) Irving Wainer & James W. Jorgenson (1992) Dai E. Games, Barry L. Karger, Daniel W. Armstrong, & Dennis H. Desty (1991) Egil Jellum, William Pirkle, & Carl A. Cramers (1990) Jon Calvin Giddings, Udo. A Th Brinkman, J. F. K. Huber, Rudolf E. Kaiser, & Lloyd R. Snyder (1986) Ervin Kovats & John Knox (1985) C. E. Roland Jones & Arnaldo L. Liberti (1984) Gerhard Schomburg & Ralph Stock (1983) Edward R. Adlard, Leslie S. Ettre, Courtney S. G. Phillips, & Raymond P. W. Scott (1982) G. A. Peter Tuey & Georges Guiochon (1980) Ernst Bayer & C. E. H. Knapman (1978) References Electrophoresis Chromatography
Martin Medal
Chemistry,Biology
526
211,842
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Borwein%20constant
The Erdős–Borwein constant, named after Paul Erdős and Peter Borwein, is the sum of the reciprocals of the Mersenne numbers. By definition it is: $E = \sum_{n=1}^{\infty} \frac{1}{2^n - 1} \approx 1.606695$. Equivalent forms It can be proven that the following forms all sum to the same constant: $E = \sum_{n=1}^{\infty} \frac{1}{2^n - 1} = \sum_{n=1}^{\infty} \frac{\sigma_0(n)}{2^n} = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{1}{2^{mn}}$, where σ0(n) = d(n) is the divisor function, a multiplicative function that equals the number of positive divisors of the number n. To prove the equivalence of these sums, note that they all take the form of Lambert series and can thus be resummed as such. Irrationality In 1948, Erdős showed that the constant E is an irrational number. Later, Borwein provided an alternative proof. Despite its irrationality, the binary representation of the Erdős–Borwein constant may be calculated efficiently. Applications The Erdős–Borwein constant comes up in the average-case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap. References External links Mathematical constants Irrational numbers Borwein constant
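A quick numerical check of the definition against the divisor-function form, using exact rational arithmetic; the 60-term truncation is an arbitrary choice, and the two truncated sums differ only by a tail far below double precision:

```python
from fractions import Fraction

N = 60  # number of terms to keep; the omitted tail is below 2**-59

# Definition: sum of reciprocals of the Mersenne numbers 2^n - 1.
e1 = sum(Fraction(1, 2**n - 1) for n in range(1, N + 1))

# Equivalent Lambert-series form: sum of d(n) / 2^n,
# where d(n) counts the positive divisors of n.
def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

e2 = sum(Fraction(d(n), 2**n) for n in range(1, N + 1))

print(float(e1))            # 1.6066951524152917...
print(float(e2))            # agrees to double precision
print(abs(float(e1 - e2)))  # tiny truncation mismatch
```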
Erdős–Borwein constant
Mathematics
236
45,458,060
https://en.wikipedia.org/wiki/End-sequence%20profiling
End-sequence profiling (ESP) (sometimes "paired-end mapping (PEM)") is a method based on sequence-tagged connectors developed to facilitate de novo genome sequencing and to identify high-resolution copy number and structural aberrations such as inversions and translocations. Briefly, the target genomic DNA is isolated and partially digested with restriction enzymes into large fragments. Following size-fractionation, the fragments are cloned into plasmids to construct artificial chromosomes such as bacterial artificial chromosomes (BACs), which are then sequenced and compared to the reference genome. The differences, including orientation and length variations between constructed chromosomes and the reference genome, suggest copy number and structural aberrations. Artificial chromosome construction Before analyzing target genome structural aberration and copy number variation (CNV) with ESP, the target genome is usually amplified and conserved through artificial chromosome construction. The classic strategy to construct an artificial chromosome is the bacterial artificial chromosome (BAC). Basically, the target chromosome is randomly digested and inserted into plasmids which are transformed and cloned in bacteria. The size of the fragments inserted is 150–350 kb. Another commonly used artificial chromosome is the fosmid. The difference between BACs and fosmids is the size of the DNA inserted. Fosmids can only hold 40 kb DNA fragments, which allows a more accurate breakpoint determination. Structural aberration detection End-sequence profiling (ESP) can be used to detect structural variations such as insertions, deletions, and chromosomal rearrangements. Compared to other methods that look at chromosomal abnormalities, ESP is particularly useful for identifying copy-neutral abnormalities such as inversions and translocations that would not be apparent when looking at copy number variation. From the BAC library, both ends of the inserted fragments are sequenced using a sequencing platform. Detection of variations is then achieved by mapping the sequenced reads onto a reference genome. Inversion and translocation Inversions and translocations are relatively easy to detect by an invalid pair of sequenced ends. For instance, a translocation can be detected if the paired ends are mapped onto different chromosomes of the reference genome. An inversion can be detected by divergent orientation of the reads, where both ends of the insert map to the same strand (two plus ends or two minus ends). Insertion and deletion In the case of an insertion or a deletion, the mapping of the paired ends is consistent with the reference genome, but the reads are discordant in apparent size. The apparent size is the distance between the BAC sequenced ends mapped in the reference genome. If a BAC has an insert of length (l), a concordant mapping will show a fragment of size (l) in the reference genome. If the paired ends are closer than distance (l), an insertion is suspected in the sampled DNA. A distance of (l < μ − 3σ) can be used as a cut-off to detect an insertion, where μ is the mean length of the insert and σ is the standard deviation. In the case of a deletion, the paired ends are mapped further apart in the reference genome than the expected distance (l > μ + 3σ). Copy number variation In some cases, discordant reads can also indicate a CNV, for example in sequence repeats. For larger CNVs, the density of the reads will vary according to the copy number. An increase in copy number will be reflected by increased mapping of the same region on the reference genome. 
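A minimal sketch of the classification logic just described, assuming each end pair has already been mapped to the reference; the record fields, the example coordinates, and the library parameters are invented for illustration, while the decision rules (different chromosomes, same strand, μ ± 3σ span test) mirror the text:

```python
def classify_pair(pair, mu, sigma):
    """Classify a mapped end pair against the reference genome.

    `pair` holds the mapped chromosome, strand, and position of each
    end; `mu` and `sigma` describe the expected insert size.
    """
    if pair["chrom1"] != pair["chrom2"]:
        return "translocation"   # ends map to different chromosomes
    if pair["strand1"] == pair["strand2"]:
        return "inversion"       # two plus ends or two minus ends
    span = abs(pair["pos2"] - pair["pos1"])
    if span < mu - 3 * sigma:
        return "insertion"       # ends map closer than expected
    if span > mu + 3 * sigma:
        return "deletion"        # ends map further apart than expected
    return "concordant"

mu, sigma = 150_000, 10_000  # e.g. a 150 kb BAC library (illustrative)
pairs = [
    {"chrom1": "chr1", "chrom2": "chr1", "strand1": "+", "strand2": "-",
     "pos1": 1_000_000, "pos2": 1_148_000},  # concordant
    {"chrom1": "chr1", "chrom2": "chr1", "strand1": "+", "strand2": "-",
     "pos1": 2_000_000, "pos2": 2_260_000},  # deletion in the sample
    {"chrom1": "chr1", "chrom2": "chr7", "strand1": "+", "strand2": "-",
     "pos1": 3_000_000, "pos2": 5_000_000},  # translocation
]
for p in pairs:
    print(classify_pair(p, mu, sigma))
```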
ESP history ESP was first developed and published in 2003 by Dr. Collins and his colleagues at the University of California, San Francisco. Their study revealed the chromosome rearrangements and CNV of MCF7 human cancer cells at a 150 kb resolution, which was much more accurate than both CGH and spectral karyotyping at that time. In 2007, Dr. Snyder and his group improved ESP to 3 kb resolution by sequencing both ends of 3-kb DNA fragments without BAC construction. Their approach is able to identify deletions, inversions, and insertions with an average breakpoint resolution of 644 bp, which is close to the resolution of the polymerase chain reaction (PCR). ESP applications Various bioinformatics tools can be used to analyze end-sequence profiling data. Common ones include BreakDancer, PEMer, VariationHunter, CommonLAW, GASV, and Spanner. ESP can be used to map structural variation at high resolution in diseased tissue. This technique is mainly used on tumor samples from different cancer types. Accurate identification of copy-neutral chromosomal abnormalities is particularly important, as translocations can lead to fusion proteins, chimeric proteins, or misregulated proteins that can be seen in tumors. This technique can also be used in evolution studies by identifying large structural variation between different populations. Similar methods are being developed for various applications. For example, a barcoded Illumina paired-end sequencing (BIPES) approach was used to assess microbial diversity by sequencing the 16S V6 tag. Advantages and limitations The resolution of structural variation detection by ESP has been increased to a level similar to that of PCR, and can be further improved by selection of more evenly sized DNA fragments. ESP can be applied either with or without artificial chromosome construction. With BACs, precious samples can be immortalized and conserved, which is particularly important for small-quantity samples intended for extensive analyses. Furthermore, BACs carrying rearranged DNA fragments can be directly transfected in vitro or in vivo to analyze the function of these rearrangements. However, BAC construction is still expensive and labor-intensive, so researchers should choose carefully which strategy a particular project requires. Because ESP only looks at short paired-end sequences, it has the advantage of providing useful information genome-wide without the need for large-scale sequencing: approximately 100–200 tumors can be sequenced at a resolution greater than 150 kb for the equivalent of sequencing one entire genome. References See also Chromosome abnormalities Chromosomal inversion Insertion (genetics) Deletion (genetics) Chromosomal translocation Chromosome abnormality Copy-number variation Molecular biology Laboratory techniques Molecular biology techniques DNA sequencing
End-sequence profiling
Chemistry,Biology
1,292
35,443,087
https://en.wikipedia.org/wiki/Employee%20motivation
Employee motivation is an intrinsic and internal drive to put forth the necessary effort and action towards work-related activities. It has been broadly defined as the "psychological forces that determine the direction of a person's behavior in an organisation, a person's level of effort and a person's level of persistence". Also, "Motivation can be thought of as the willingness to expend energy to achieve a goal or a reward. Motivation at work has been defined as 'the sum of the processes that influence the arousal, direction, and maintenance of behaviors relevant to work settings'." Motivated employees are essential to the success of an organization, as motivated employees are generally more productive in the workplace. Motivational techniques Motivation is the impulse that an individual has in a job or activity to reach an end goal. There are multiple theories of how best to motivate workers, but all agree that a well-motivated workforce means a more productive workforce. Taylorism Frederick Winslow Taylor was one of the first theorists to attempt to understand employee motivation. His theory of scientific management, also referred to as Taylorism, analyzes the productivity of the workforce. Taylor's basic theory of motivation is that workers are motivated by money. He viewed employees not as individuals but as pieces of a larger workforce; in doing so, his theory stresses that giving employees individual tasks, supplying them with the best tools, and paying them based on their productivity was the best way to motivate them. Taylor's theory developed in the late 1890s and can still be seen today in industrial engineering and manufacturing industries. Hawthorne effect In the mid-1920s another theorist, Elton Mayo, along with Fritz Roethlisberger and William Dickson from the Harvard Business School, began studying the workforce. His study of the Hawthorne Works led him to his discovery of the Hawthorne effect. The Hawthorne effect is the idea that people change their behavior as a reaction to being observed. Mayo found that employees' productivity increased when they knew they were being watched. He also found that employees were more motivated when they were allowed to give input on their working conditions and that input was valued. Mayo's research and motivational theories were the start of the Human Relations school of management. However, studies and systematic reviews are still being conducted to find out whether the Hawthorne effect exists and what level of impact it can make under certain conditions. Gratitude in advance Thanking employees can help them, especially if done before they engage in difficult or distressing tasks. A study conducted in 2024 showed that gratitude can help increase employees' sense of social worth, which helps them to persist through challenges more effectively than if the gratitude is expressed after the task. Job design The design of an employee's job can have a significant effect on their job motivation. Job design includes designing jobs that create both a challenging and interesting task for the employee and are effective and efficient for getting the job done. Four approaches to job design are: Job Simplification: The goal of this job design approach is to standardize and specialize tasks. This approach does not always lead to increased motivation because the jobs can become mundane over time. Job Enlargement: The goal of this job design approach is to combine tasks to give the employee a greater variety of work.
Job Rotation: The goal of this job design approach is to move workers to different tasks periodically. Job Enrichment: The key job design approach for employee motivation, it aims to enhance the actual job by building up the employee through motivational factors. Several studies validate the effectiveness of using job design techniques to increase employee motivation. A study conducted by Campion and Thayer used a job design questionnaire to determine how job designs fostering motivation affected employees. Campion and Thayer found that jobs with more motivational features have lower effort requirements, greater well-being, and fewer health complaints. The study also found that jobs scoring high on the motivational subscale of the questionnaire contained employees who were more satisfied and motivated, had a higher rating pertaining to job performance, and had fewer absences. Hackman conducted a study on work redesign and how redesigning work could improve productivity and motivation through job enlargement or enrichment. The study found that redesigning a job can improve the quality of the product or service that is provided, increase the quantity of work, and increase work satisfaction and motivation. The last study on job design was conducted by Dunham, who wanted to determine whether there was a relationship between job design characteristics and job ability and compensation requirements. Dunham believed organizations were overlooking job ability requirements and compensation when they enlarged or enriched employees' jobs. The study found that organizations were not taking into account the increased job ability requirements that job enrichments or enlargements entail, nor were they increasing compensation for employees who were given extra tasks and/or more complex tasks. Rewards Using rewards as motivators divides employee motivation into two categories: intrinsic and extrinsic motivation. Intrinsic rewards are internal, psychological rewards such as a sense of accomplishment or doing something because it makes one feel good. Extrinsic rewards are rewards that other people give to you, such as money, compliments, bonuses, or trophies. This applies to Douglas McGregor's Theory X, which addresses the extrinsic wants of employees. The basis for the motivation is supervision structure and money. Theory X is based on the grounds that employees don't want to work, so they have to be forced to do their jobs and enticed with monetary compensation. Theory Y, also derived from McGregor's work, says that employees are motivated by intrinsic or personal reward. With this theory, different factors can be used to heighten the intrinsic benefit that employees are receiving at their job. Many studies have been conducted on how motivation is affected by rewards, resulting in conflicting and inconsistent outcomes. Pierce, Cameron, Banko, and So conducted a study to examine how extrinsic rewards affect people's intrinsic motivation when the rewards are based on increasingly higher performance criteria. Pierce et al. found that people rewarded for meeting a graded level of performance, which became increasingly difficult, spent more time on the study's activities and experienced an increase in intrinsic motivation. Participants who were not rewarded at all, or were only rewarded for maintaining a constant level of performance, experienced less intrinsic motivation.
Another study that examined the effects of extrinsic rewards on intrinsic motivation was conducted by Wiersma. Wiersma conducted a meta-analysis to summarize the inconsistent results of past studies. The meta-analysis concluded that when extrinsic rewards are given by chance, they reduce intrinsic motivation. This result is supported when task behavior is measured during a free-time period; however, it is not supported when task performance is measured while the extrinsic reward is in effect. Wiersma also found that these results cannot be generalized to all situations. A study conducted by Earn also examined the effects of extrinsic rewards on intrinsic motivation. Earn wanted to know whether extrinsic rewards affected a person's intrinsic motivation based on the subject's locus of control. Earn found that pay increases decreased intrinsic motivation for subjects with an external locus of control, whereas pay increases increased intrinsic motivation for subjects with an internal locus of control. The study also found that when the controlling aspect of the extrinsic reward was made pertinent by making pay dependent on a certain amount of performance, higher pay undermined the intrinsic motivation of subjects regardless of their locus of control. Intrinsic rewards: Job Characteristics Model The Job Characteristics Model (JCM), as designed by Hackman and Oldham, attempts to use job design to improve employee intrinsic motivation. They show that any job can be described in terms of five key job characteristics: Skill Variety - the degree to which the job requires the use of different skills and talents Task Identity - the degree to which the job contributes to a clearly identifiable larger project Task Significance - the degree to which the job affects the lives or work of other people Autonomy - the degree to which the employee has independence, freedom and discretion in carrying out the job Task Feedback - the degree to which the employee is provided with clear, specific, detailed, actionable information about the effectiveness of his or her job performance The JCM links the core job dimensions listed above to critical psychological states which result in increased employee intrinsic motivation. This forms the basis of "employee growth-need strength." The core dimensions listed above can be combined into a single predictive index, called the Motivating Potential Score. Employee participation 1. Increase employee participation by implementing quality control "circles". Quality control circles involve a group of five to ten employees who come together to solve work-related problems such as reducing costs, solving quality problems, and improving production methods. Other benefits of quality control circles include an improved employee-management relationship, increased individual commitment, and more opportunities for employee expression and self-development. A study by Marks et al. focused on assessing the effect that quality circles had on participating employees and found that the attitudes of employees who participated in quality circles were influenced in the areas of participation, decision making, and group communication. Although group communication was influenced, communication through the organization as a whole was not, and neither was employees' personal responsibility for their work.
The results of this study suggest that quality circles can provide employees with informational and social support that can help increase their motivation. 2. Increase motivation through employee participation by using open-book management. Open-book management is when a company shares important financial data with employees. Sharing the information empowers employees by putting trust in them. Employees become personally and meaningfully involved with the organization beyond just doing their assigned tasks, which increases their motivation and production. Open-book management is a four-step process. The first step involves employers sharing financial data with their employees. Employees need to know how the company, as a whole, is doing financially. Next, employers must teach their employees how to read and interpret the financial data. Employees can look at all the data a company gives them; however, to understand the data, they must know how to interpret the numbers. Third, employees have to be empowered to make necessary changes and decisions for the success of the organization. Employers should treat their employees like partners to promote increased employee motivation. The last step involves employers paying their employees a fair share of profits through bonuses and incentives. Bonuses must be attached to numbers that employees see regularly and can influence. With these steps in mind, the friction among employees and between employees and management can be drastically reduced. Four factors must exist for any employee participation program to be successful: Have a profit-sharing or gain-sharing plan where both the employer and employee benefit Implement a long-term employment relationship to instill job security Make a concerted effort to build and maintain group cohesiveness Provide protection of the individual employee's rights "Work motivation is a set of energetic forces that originate both within as well as beyond an individual's being, to initiate work-related behavior and to determine its form, direction, intensity, and duration" (Pinder, 1998, p. 11). Quality-of-work-life programs Work-life balance is an employee's perception of how a proper balance between personal time, family care, and work is maintained with minimal conflict. Employers can use work-life balance as a motivational technique by implementing quality-of-work-life programs. Examples of such programs include flextime, workplace wellness, and family support. Flexible work schedules can allow an employee to work whenever they can, as long as a certain number of hours are worked each week, and some employers allow their employees to work from home. Sometimes employers utilize flextime schedules that allow employees to arrive at work when they choose within specified limits. A wellness program can involve having an exercise facility, offering counseling, or even having programs set up to help employees lose weight or stop smoking cigarettes. Family support programs involve help with parenting and childcare, and some programs allow employees to leave for family purposes. One study found that men often identify themselves with their career and work roles while women often identify themselves with the roles of mother, wife, friend, and daughter. The Sloan Foundation found that even though women enjoy working as much as men, women prefer to work nights and weekends if time needs to be made up, instead of cutting their hours.
A study conducted by the Alliance for Work-Life Progress surveyed employees to find out the type of workplace flexibility employees said they would like to use in the following year. Burrus et al. found that 71 percent of people want an occasional opportunity to adjust their schedule, 57 percent want to work from a location other than their office, 73 percent want to make their work-life flexibility arrangement official, and 12 percent want to work fewer hours. Employee Engagement A motivated employee becomes engaged in their workplace. Employee engagement is an important part of an organization's success. Research has found that organizations with engaged employees have three times higher profit margins compared to organizations with disengaged employees. Shareholder returns, operating income, and revenue growth have also been higher in organizations with engaged employees. In addition, employee engagement is linked to lower absenteeism within an organization. Employers who practice employee motivation and engagement techniques in their organization will likely see an increase in overall business performance. Motivational theories Maslow's hierarchy of needs Abraham Maslow viewed motivation as being based on a hierarchy of needs, in which a person cannot move to the next level of needs without satisfying the previous level. Maslow's hierarchy starts at the lowest level, basic physiological needs. Basic physiological needs include air, water, and food. Employers who pay at least a minimal living wage will meet these basic employee needs. The next level of needs is referred to as safety and security needs. This level includes needs such as having a place to live and knowing one is safe. Employers can meet these needs by ensuring employees are safe from physical, verbal and/or emotional hazards and have a sense of job security. The third level of needs is social affiliation and belonging. This is the need to be social, have friends, and feel like one belongs and is loved. Implementing employee participation programs can help fulfill the need to belong. Rewards such as acknowledging an employee's contributions can also satisfy these social and love needs. The fourth level on the hierarchy is esteem needs. This level is described as feeling good about oneself and knowing that one's life is meaningful, valuable, and has a purpose. Employers should use the job design technique to create jobs that are important to and cherished by the employee. Maslow called these first four needs D-Needs (deficiency needs). The last level Maslow described is called self-actualization, which Maslow called the B-Need (being need). This level refers to people reaching their potential states of well-being. An employer who ensures that an employee is in the right job and has all other needs met will help the employee realize this highest need. "Maslow further expanded self-actualization into four needs: cognitive, aesthetic, self-actualization, and self-transcendence." Herzberg's two-factor theory Frederick Herzberg developed the two-factor theory of motivation based on satisfiers and dissatisfiers. Satisfiers are motivators associated with job satisfaction while dissatisfiers are motivators associated with hygiene or maintenance. Satisfiers include achievement, responsibility, advancement, and recognition. Satisfiers are all intrinsic motivators that are directly related to rewards attainable from work performance and even the nature of the work itself.
Dissatisfiers are extrinsic motivators based on the work environment, and include a company's policies and administration as well as supervision, peers, working conditions, and salary. Herzberg believed providing for hygiene and maintenance needs could prevent dissatisfaction but not contribute to satisfaction. Herzberg also believed that satisfiers hold the greatest potential for increased work performance. Work-life programs are a form of satisfier that recognizes the employee's life outside of work, which, in turn, helps motivate the employee. Improving a job to make it more interesting can improve the overall satisfaction an employee experiences on the job. One dissatisfier weighed by employees is how relationships form with colleagues. Colleagues play an important role in the workplace, as they all interact daily. Forming high-quality relationships with peers can extrinsically improve employee motivation. Vroom's expectancy theory The expectancy theory of motivation was established by Victor Vroom with the belief that motivation is based on the expectation of desired outcomes. The theory is based on four concepts: valence, expectancy, instrumentality and force. Valence is the attractiveness of potential rewards, outcomes, or incentives. Expectancy is a person's belief that they will or will not be able to reach the desired outcome. Instrumentality is the belief that a strong performance will be well rewarded. Force is a person's motivation to perform. In general, people will work hard when they think that it is likely to lead to desired organizational rewards. Vroom thought that people are motivated to work toward a goal if they believe the goal is worthwhile and if they perceive that their efforts will contribute to the achievement of that goal. Force = Valence × Expectancy × Instrumentality Locke's goal theory As Human Relations management took hold, increasing intrinsic motivation and attending to individuals became a larger concern for employers. Increasing intrinsic motivation can be achieved through the goal-setting theory of Edwin A. Locke. Employers that set realistic and challenging goals for their employees create employee motivation. By allowing employees to engage in their job and achieve satisfaction when reaching a goal, employers can entice them to keep setting new goals to reach new successes and yield superior performance. The theory is logical because employees will set more difficult goals, but the goals will be attainable with increased effort. Once in the pattern of setting goals, employees can also develop goal commitment, whereby they are more likely to stick to jobs until they are finished. Employees that work alongside their employers in the goal-setting process gain the intrinsic benefit of participating in decisions, which can lead to higher motivation as they are empowered in their workplace. As employees reach these personally set goals, management can reinforce those efforts by showing recognition of their success. Locke and Latham's five goal setting principles Dr. Gary Latham collaborated with Edwin Locke to expand upon his goal setting theory of motivation with five key principles designed to motivate the accomplishment and completion of a particular objective. These five key principles align closely with the SMART goal setting strategy designed to define objectivity and achievability.
The five key principles are: Clarity: Clear goals are measurable and unambiguous, giving a clear definition of the expectations of the objective. Challenge: People are often motivated by the anticipated significance of successfully completing the particular task. Commitment: There is a direct correlation between employees' motivation to complete an objective and their involvement in establishing the goal and its boundaries. Feedback: Consistent feedback during the objective completion process provides clarity of expectations, the ability to adjust difficulty, and the opportunity to gain recognition. Complexity: People in a highly demanding environment typically already have a high level of motivation, but it is important that the goal does not overwhelm the individual, so that motivation is maintained. Integration of Conventional with Islamic Theories The integration of Western management theories with Islamic principles, specifically the Maqasid Shariah, for employee motivation involves harmonizing individual rights and dignity with Islamic values. This synergy emphasizes social justice, fair treatment, continuous learning, and ethical leadership. By aligning professional development, teamwork, intrinsic motivation, work-life balance, and recognition with Islamic objectives, organizations can cultivate a workplace that not only prioritizes individual well-being but also upholds broader societal and moral dimensions, fostering motivation in accordance with Islamic principles. As a result, Abdullah et al. (2023) have developed a unique employee motivation index by fusing McClelland's theory with the Maqasid Shariah in their studies. See also Work motivation Suggestion system Employee offboarding References Employee relations Motivation
Employee motivation
Biology
4,127
4,838,711
https://en.wikipedia.org/wiki/Forest%20kindergarten
Forest kindergarten is a type of preschool education for children between the ages of three and six that is held almost exclusively outdoors. Whatever the weather, children are encouraged to play, explore and learn in a forest environment. The adult supervision is meant to assist rather than lead. It is also known as Waldkindergarten (in German), outdoor nursery, or nature kindergarten. Activities A forest kindergarten can be described as a kindergarten "without a ceiling or walls". The daycare staff and children spend their time outdoors, typically in a forest. A distinctive feature of forest kindergartens is the emphasis on play with objects that can be found in nature, rather than commercial toys. Despite these differences, forest kindergartens are meant to fulfill the same basic purpose as other nurseries, namely, to care for, stimulate, and educate young children. Each forest kindergarten is different, partly because the organisations are independently minded. But typical activities and goals may include: Location and organization Forest kindergartens operate mainly in woodland. There should be a building where children can shelter from extreme weather. They may also spend a small part of each day indoors, although that is more likely to be for administrative and organisational reasons, such as to provide a known location where parents can deliver and collect their children. If the woodland is too far away to reach on foot, a vehicle might reluctantly be used for transport. Children are encouraged to dress for the weather, with waterproof clothes and warm layers, according to the climate. History In rural areas, and in earlier times, access to nature was not a problem. Over the last century, with increasing urbanisation and "nature deficit disorder", there have been many changes in stance on outdoor education. The first forest kindergarten was created by Ella Flautau in Denmark in the early 1950s. The idea formed gradually as a result of her often spending time with her own and neighbors' children in a nearby forest, a form of daycare which elicited great interest among the neighborhood parents. The parents formed a group and created an initiative to establish the first forest kindergarten. In Sweden in 1957, an ex-military man, Gösta Frohm, created the idea of "Skogsmulle". "Skog" means wood in Swedish. "Mulle" is one of four fictional characters he created to teach children about nature, along with "Laxe" representing water, "Fjällfina" representing mountains and "Nova" representing unpolluted nature. Forest schools based on Frohm's model, called "I Ur och Skur" (Rain or Shine Schools), moved the idea from occasional activities to formal nursery schools, the first being set up by Siw Linde in 1985. Juliet Robertson's review of Skogsmulle is a valuable modern-day summary. Nature kindergartens have existed in Germany since 1968, but forest kindergarten was first officially recognized as a form of daycare in 1993, enabling state subsidies to reduce the daycare fees of children who attended forest kindergarten. Since then, forest kindergartens have become increasingly popular. As of 2005 there were approximately 450 forest kindergartens in Germany, some of which offer a mix of forest kindergarten and traditional daycare, spending their mornings in the forest and afternoons inside. By late 2017, the number of forest kindergartens in Germany surpassed 1,500.
In 2009, the Forestry Commission Scotland (FCS) undertook a feasibility study to create a Forest Kindergarten pilot project in Glasgow and the Clyde Valley. This model is based upon empowering early years educators to lead weekly sessions in their local woodland or other greenspace using a child-centred approach. The first FCS Forest Kindergarten 3-day training took place in February 2012. In 2017 the course became a Scottish Qualification Award (SQA) at SCQF Level 7. This Forest Kindergarten training has now been embedded in various Early Years College courses within Scotland and is delivered by Learning through Landscapes across the UK. This qualification will soon operate in the rest of the UK under NOCN accreditation. In Aotearoa New Zealand, Enviroschools started in 2001 and often incorporate a Māori perspective, and Australia has bush or beach kinders (kindergartens) that provide an outdoor learning program. While there are similarities, it is important to note that Forest School and Forest Kindergarten are two distinct training programmes. LtL has produced a useful comparison of Forest Kindergarten and Forest School. From 2018 onwards, all forest kindergartens have been invited to celebrate the International Day of Forest Kindergarten every year on 3 May. Effects The fact that most forest kindergartens do not provide commercial toys that have a predefined meaning or purpose supports the development of language skills, as children verbally create a common understanding of the objects used as toys in the context of their play. Forest kindergartens are also generally less noisy than closed rooms, and noise has been shown to be a factor in the stress level of children and daycare professionals. For inner-city girls, having sight of a green space from home improves self-discipline, while the same effect was not noted for boys in the study, as they were more likely to play further from home. Playing outside for prolonged periods has been shown to have a positive impact on children's development, particularly in the areas of balance and agility, but also manual dexterity, physical coordination, tactile sensitivity, and depth perception. According to these studies, children who attend forest kindergartens experience fewer injuries due to accidents and are less likely to injure themselves in a fall. A child's ability to assess risks improves, for example in handling fire and dangerous tools. Other studies have shown that spending time in nature improves attention and medical prognosis in women (see Attention Restoration Theory). Playing outdoors is said to strengthen the immune systems of children and daycare professionals. When children from German Waldkindergartens go to primary school, teachers observe a significant improvement in reading, writing, mathematics, social interactions and many other areas. Forest kindergartens have been recommended for young boys, who may not yet demonstrate the same fluency in typical school tasks as their female counterparts, to prevent negative self-esteem and associations with school. Roland Gorges found that children who had been to a forest kindergarten were above average, compared by teachers to those who had not, in all areas of skill tested. In order of advantage, these were: Motivation Helicopter parenting is becoming more clearly recognised in the culture of fear of today's risk-averse society.
While some parents rush to 'wrap their children in cotton wool', others see outdoor play and forest kindergartens as a way to develop a mature and healthy outlook on life, as well as practical skills and health. Doing this at a young age is hoped to bring lifelong benefits to the child. It is consistent with the notions of slow parenting, the "idle parent" and "free range kids". See also Free-range parenting German Forest Outdoor education Urban forest Adventure playground Helicopter parent Slow parenting Wandervogel References Related organisations American Forest Kindergarten Association, a U.S. forest kindergarten model based on the Waldkindergarten and Nordic models. Learning through Landscapes, a non-profit organisation providing SQA Accredited Forest Kindergarten Awards in the UK. Eastern Region Association of Forest and Nature Schools (ERAFANS), a 501(c)3 non-profit organization that offers nature-based professional development to teachers and childcare providers. Play England, a charity raising awareness of the value of play PlayScotland, a charity encouraging children to play Association of all Forest Kindergartens in Czech Republic Natural Start Alliance in United States Alternative education Early childhood education Kindergarten School types Environmental education Outdoor education
Forest kindergarten
Environmental_science
1,548
34,211,524
https://en.wikipedia.org/wiki/Cuphophyllus%20virgineus
Cuphophyllus virgineus is a species of agaric (gilled mushroom) in the family Hygrophoraceae. Its recommended English common name in the UK is snowy waxcap. The species has a largely north temperate distribution, occurring in grassland in Europe and in woodland in North America and northern Asia, but is also known from Australia. It typically produces basidiocarps (fruit bodies) in the autumn. Taxonomy The species was first described in 1781 by the Austrian mycologist Franz Xaver von Wulfen as Agaricus virgineus. It was subsequently combined in a number of different genera, being transferred to Hygrocybe in 1969 before being transferred to Cuphophyllus. The specific epithet comes from the Latin "virgineus" (= pure white). Hygrocybe nivea, first described by the Italian mycologist and naturalist Giovanni Antonio Scopoli in 1772, was sometimes distinguished by producing smaller and more slender fruit bodies than H. virginea, but is now regarded as a synonym. Molecular research published in 2011, based on cladistic analysis of DNA sequences, found that Hygrocybe virginea does not belong in Hygrocybe sensu stricto and belongs in the genus Cuphophyllus instead. Description Basidiocarps are agaricoid, up to 75 mm (3 in) tall, the cap convex at first, becoming flat or slightly depressed when expanded, up to 75 mm (3 in) across. The cap surface is smooth, waxy when damp, hygrophanous and somewhat translucent with a striate margin, white to ivory (rarely with ochre to brownish tints). The lamellae (gills) are waxy, cap-coloured, and decurrent (widely attached to and running down the stipe). The stipe (stem) is smooth, cylindrical or tapering to the base, cap-coloured, and waxy when damp. The spore print is white, the spores (under a microscope) smooth, inamyloid, ellipsoid, about 7.0 to 8.5 by 4.5 to 5.0 μm. The taste is bitter to acrid. The species is sometimes parasitized by the mould Marquandomyces marquandi, which colours the lamellae violet. Similar species Cuphophyllus russocoriaceus is very similar in appearance, but can be distinguished in the field by its strong smell of sandalwood. Cuphophyllus berkeleyi is also similar, but its fruit bodies are typically larger and non-hygrophanous (it has sometimes been considered a white form of Cuphophyllus pratensis). Distribution and habitat The snowy waxcap is widespread throughout the north temperate zone, occurring in Europe, North America, and northern Asia, and has also been recorded from Australia. Like other waxcaps, it grows in old, unimproved, short-sward grassland (pastures and lawns) in Europe, but in woodland elsewhere. Recent research suggests waxcaps are neither mycorrhizal nor saprotrophic but may be associated with mosses. Conservation In Europe, Cuphophyllus virgineus is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. Cuphophyllus virgineus is one of the commonest species in the genus and is not considered to be of conservation concern (unlike most other waxcaps). In 1997, the species was featured on a postage stamp issued by the Faeroe Islands. Edibility Fruit bodies are considered edible and good. References Hygrophoraceae Edible fungi Fungi of Asia Fungi of Australia Fungi of Europe Fungi of North America Fungi described in 1781 Fungus species
Cuphophyllus virgineus
Biology
779
35,313,070
https://en.wikipedia.org/wiki/Richard%20Handl
Richard Handl (born May 23, 1980) is a Swedish man who experimented with tritium, americium, aluminium, beryllium, thorium, radium, and uranium with the intention of creating a nuclear reaction. He acquired most of the radioactive materials from foreign companies while assembling a collection of periodic elements. For six months in 2011, he allegedly attempted to build a breeder reactor in his apartment in Ängelholm, Sweden. Background Handl became unemployed after working in a factory for four years and decided to start a collection of the elements in the periodic table. Out of curiosity, Handl began experimenting with the elements in his collection to see if he could create a nuclear reaction. Handl's experiments included the acquisition of fissile material from outside the country, a radiator suitable for transmutation, and instruments to measure the reaction, including a Geiger counter. He spent about 5,000–6,000 kronor on materials and equipment. One stage of the process involved cooking americium, radium, and beryllium on a stove in order to mix the ingredients more easily; doing so resulted in an explosion. Handl kept a blog called "Richard's Reactor" in which he documented the progress of the reactor. Legal repercussions Handl was detained by the police on 22 July 2011, after having contacted the Swedish Radiation Safety Authority (SSM) to inquire whether his project was legal or not. His apartment was searched, and the radioactive materials as well as his computer were taken by the police. He was released, then convicted in July 2014 of violating the Radiation Safety Act and the Swedish Environmental Code. He was fined 13,600 kronor. See also David Hahn Taylor Wilson References Swedish criminals Nuclear accidents and incidents Place of birth missing (living people) Living people 1980 births Radioactively contaminated areas
Richard Handl
Chemistry,Technology
395
25,678,734
https://en.wikipedia.org/wiki/Kepler-6b
Kepler-6b is an extrasolar planet in the orbit of the unusually metal-rich star Kepler-6, a star in the field of view of the NASA-operated Kepler spacecraft, which searches for planets that cross directly in front of, or transit, their host stars. It was the third planet to be discovered by Kepler. Kepler-6b orbits its host star every three days at a distance of 0.046 AU. Its proximity to Kepler-6 has inflated the planet, which has about two-thirds the mass of Jupiter, to slightly larger than Jupiter's size, and has greatly heated its atmosphere. Follow-up observations led to the planet's confirmation, which was announced at a meeting of the American Astronomical Society on January 4, 2010, along with four other Kepler-discovered planets. Discovery and naming NASA's Kepler satellite trails the Earth and continually observes a portion of the sky between the constellations Cygnus and Lyra. It is designed to search for and discover planets that transit, or cross in front of, their host stars with respect to Earth by measuring small and generally periodic variations in a star's brightness. Kepler recognized a potential transit event around a star that was designated KOI-017, which was named Kepler-6 after the confirmation of Kepler-6b. The star was designated "6" because it hosted the sixth planet to be observed (but the third planet to be discovered) by the Kepler satellite. After the initial detection of a transit signal by Kepler, follow-up observations were taken to confirm the planetary nature of the candidate. Speckle imaging by the WIYN Telescope was used to determine the amount of light from nearby, background stars that was present. If not accounted for, this light would have made Kepler-6 appear brighter than it actually was; consequently, the size of Kepler-6b would have been underestimated. Radial velocity data were taken by HIRES at the Keck I telescope in order to determine the mass of the planet. Independently, observations were made with the Spitzer Space Telescope at infrared wavelengths of 3.6 and 4.5 micrometres. Along with additional data taken by Kepler, these observations detected the occultation of Kepler-6b behind its star and the planet's phase curve. The confirmation of Kepler-6b was announced at the 215th meeting of the American Astronomical Society, together with the discoveries of planets Kepler-4b, Kepler-5b, Kepler-7b, and Kepler-8b, on January 4, 2010. Host star Kepler-6 is a sunlike star in the Cygnus constellation. It is approximately 20.9% more massive and 39.1% larger than the Sun. With an effective temperature of 5647 K, Kepler-6 is cooler than the Sun. It is predicted to be 3.8 billion years old, compared to the Sun's age of 4.6 billion years. It is most notable for its unusually high metallicity for an exoplanet-bearing star; with [Fe/H] = 0.34, Kepler-6 has 2.18 times more iron than the Sun does. Kepler-6b is the only planet that has been discovered in the orbit of Kepler-6. Characteristics Kepler-6b is a hot Jupiter, having a mass 0.669 times that of Jupiter, but an average distance of only 0.046 AU from its star and, thus, an orbital period of 3.23 days. It is almost 10 times closer to its star than Mercury is to the Sun. As a result, Kepler-6b is strongly irradiated by its star, heating its atmosphere to a temperature of 1660 K and puffing it up to a size 1.3 times that of Jupiter. It may also be the case that Kepler-6b has a thermal inversion in its atmosphere, where temperature increases with increasing distance from the center of the planet.
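As a rough consistency check, Kepler's third law recovers the quoted ~3.2-day orbital period from the semi-major axis and stellar mass given above. This is a back-of-the-envelope sketch using only values stated in the text, not a reproduction of the discovery analysis:

import math

# Kepler's third law in units of AU, solar masses, and years: P^2 = a^3 / M.
a = 0.046   # semi-major axis in AU (from the text)
M = 1.209   # stellar mass in solar masses (20.9% more massive than the Sun)
P_days = math.sqrt(a**3 / M) * 365.25
print(f"P = {P_days:.2f} days")  # ~3.28 days, close to the quoted 3.23 days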
The planet is likely to be tidally locked to its parent star. In 2015, the planet's nightside temperature was estimated at 1719 K. References External links Exoplanets with Kepler designations Exoplanets discovered in 2010 Hot Jupiters Giant planets Transiting exoplanets Cygnus (constellation) 6b
Kepler-6b
Astronomy
855
2,329,092
https://en.wikipedia.org/wiki/Noncovalent%20solid-phase%20organic%20synthesis
Noncovalent solid-phase organic synthesis (NC-SPOS) is a form of solid-phase synthesis whereby the organic substrate is bonded to the solid phase not by a covalent bond but by other chemical interactions. Synthesis This bond may consist of an induced dipole interaction between a hydrophobic matrix and a hydrophobic anchor. As long as the reaction medium is hydrophilic (polar) in nature the anchor will remain on the solid phase. Switching to a nonpolar solvent releases the organic substrate containing the anchor. In one experimental setup the hydrophobic matrix is RP silica gel (C18) and the anchor is acridone. Acridone is N-alkylated and the terminal alkene group is converted into an aldehyde by ozonolysis. This compound is bonded to RP silica gel and this system is subjected to a tandem sequence of organic reactions. The first reaction is a Barbier reaction with propargylic bromide in water (green chemistry) and the second reaction is a Sonogashira coupling. Substrates may vary in these sequences and in this way a chemical library of new compounds can be realized. References Solid-phase synthesis
Noncovalent solid-phase organic synthesis
Chemistry
242
45,662,064
https://en.wikipedia.org/wiki/Electrical%20aerosol%20spectrometer
Electrical aerosol spectrometry (EAS) is a technique for measuring the number-size distribution of an aerosol using a combination of electrical charging and multiple solid-state electrometer detectors. The technique combines both diffusion and field charging regimes to cover the diameter range 10 nm to 10 μm. Subsequent developments of the technique enable measurements faster than 1 Hz, although in each case with a reduced size range. Aerosol charging High charging efficiency allows sufficient charge to be placed on individual particles that the use of electrometer detectors is practicable, while the use of parallel electrometer detectors allows real-time measurement of the size/number spectrum with output data as fast as 0.25 Hz. Unlike in SMPS-type devices, multiple charging is an inherent issue across almost the entire size range of EAS-type devices. Accurate characterization of the electrical charging of the aerosol is therefore an essential component of device design. Calibration Techniques for the traceable calibration of such devices are established, and result in good agreement (subject to suitable signal levels) with slower but more sensitive scanning mobility particle sizers. Applications Applications include the measurement of engine exhaust, cigarette smoke, and ambient/atmospheric studies. The technique is particularly appropriate for situations where aerosol concentrations are changing on a timescale of 1 s or faster. References Spectrometers Aerosols Aerosol measurement
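A hedged sketch of the underlying electrometer relation, not a description of any particular instrument's firmware: if each particle in a size channel carries on average n elementary charges and the aerosol is drawn through at flow rate Q, a measured current I implies a number concentration N = I / (n·e·Q). The numbers below are illustrative assumptions only.

E_CHARGE = 1.602e-19  # elementary charge in coulombs

def number_concentration(current_a, mean_charges, flow_m3_per_s):
    # N = I / (n * e * Q): particles per cubic metre from an electrometer current.
    return current_a / (mean_charges * E_CHARGE * flow_m3_per_s)

# Illustrative channel: 10 fA signal, ~2 charges per particle, 1 L/min flow.
n = number_concentration(10e-15, 2.0, 1.0 / 60000.0)  # 1 L/min = 1/60000 m^3/s
print(f"{n:.2e} particles per cubic metre")  # ~1.9e9 m^-3, i.e. ~1.9e3 cm^-3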
Electrical aerosol spectrometer
Physics,Chemistry
278
11,384,086
https://en.wikipedia.org/wiki/Spin%E2%80%93spin%20relaxation
In physics, the spin–spin relaxation is the mechanism by which Mxy, the transverse component of the magnetization vector, exponentially decays towards its equilibrium value in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–spin relaxation time, known as T2, a time constant characterizing the signal decay. It is named in contrast to T1, the spin–lattice relaxation time. T2 is the time it takes for the magnetic resonance signal to irreversibly decay to 37% (1/e) of its initial value after its generation by tipping the longitudinal magnetization towards the magnetic transverse plane. Hence the relation Mxy(t) = Mxy(0) e^(−t/T2). T2 relaxation generally proceeds more rapidly than T1 recovery, and different samples and different biological tissues have different T2. For example, fluids have the longest T2: water-based tissues are in the 40–200 ms range, while fat-based tissues are in the 10–100 ms range. Amorphous solids have T2 in the range of milliseconds, while the transverse magnetization of crystalline samples decays in around 1/20 ms. Origin When excited nuclear spins—i.e., those lying partially in the transverse plane—interact with each other by sampling local magnetic field inhomogeneities on the micro- and nanoscales, their respective accumulated phases deviate from expected values. While the slow- or non-varying component of this deviation is reversible, some net signal will inevitably be lost due to short-lived interactions such as collisions and random processes such as diffusion through heterogeneous space. T2 decay does not occur due to the tilting of the magnetization vector away from the transverse plane. Rather, it is observed due to the interactions of an ensemble of spins dephasing from each other. Unlike spin–lattice relaxation, considering spin–spin relaxation using only a single isochromat is trivial and not informative. Determining parameters Like spin–lattice relaxation, spin–spin relaxation can be studied using a molecular tumbling autocorrelation framework. The relaxation rate experienced by a spin, which is the inverse of T2, is proportional to a spin's tumbling energy at the frequency difference between one spin and another; in less mathematical terms, energy is transferred between two spins when they rotate at a frequency similar to their beat frequency. Because the beat frequency range is very small relative to the average rotation rate, spin–spin relaxation is not heavily dependent on magnetic field strength. This directly contrasts with spin–lattice relaxation, which occurs at tumbling frequencies equal to the Larmor frequency. Some frequency shifts, such as the NMR chemical shift, occur at frequencies proportional to the Larmor frequency, and the related but distinct parameter T2* can be heavily dependent on field strength due to the difficulty of correcting for inhomogeneity in stronger magnet bores. Assuming isothermal conditions, spins tumbling faster through space will generally have a longer T2.
Since slower tumbling displaces the spectral energy at high tumbling frequencies to lower frequencies, the relatively low beat frequency will experience a monotonically increasing amount of energy as the tumbling correlation time increases, decreasing the relaxation time. Fast-tumbling spins, such as those in pure water, have similar T1 and T2 relaxation times, while slow-tumbling spins, such as those in crystal lattices, have very distinct relaxation times. Measurement A spin echo experiment can be used to reverse time-invariant dephasing phenomena such as millimeter-scale magnetic inhomogeneities. The resulting signal decays exponentially as the echo time (TE), i.e., the time after excitation at which readout occurs, increases. In more complicated experiments, multiple echoes can be acquired simultaneously in order to quantitatively evaluate one or more superimposed T2 decay curves. In MRI, T2-weighted images can be obtained by selecting an echo time on the order of the various tissues' T2s. In order to reduce the amount of T1 information and therefore contamination in the image, excited spins are allowed to return to near-equilibrium on a T1 scale before being excited again. (In MRI parlance, this waiting time is called the "repetition time" and is abbreviated TR.) Pulse sequences other than the conventional spin echo can also be used to measure T2; gradient echo sequences such as steady-state free precession (SSFP) and multiple spin echo sequences can be used to accelerate image acquisition or inform on additional parameters. See also Relaxation (NMR) Spin–lattice relaxation Spin echo References McRobbie D., et al. MRI, From picture to proton. 2003 Hashemi Ray, et al. MRI, The Basics 2ED. 2004. Magnetic resonance imaging Nuclear magnetic resonance Articles containing video clips
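The multi-echo measurement described above amounts to fitting the exponential model Mxy(t) = Mxy(0) e^(−t/T2) to signal versus echo time. A minimal sketch with synthetic data (a real acquisition would supply the echo amplitudes; the values here are assumptions for illustration):

import numpy as np

# Signal model: S(TE) = S0 * exp(-TE / T2); linearize and fit a line.
TE = np.array([10.0, 30.0, 60.0, 90.0, 120.0])   # echo times in ms
S = 1000.0 * np.exp(-TE / 80.0)                  # synthetic echoes with T2 = 80 ms
slope, intercept = np.polyfit(TE, np.log(S), 1)  # ln S = ln S0 - TE / T2
print(f"fitted T2 = {-1.0 / slope:.1f} ms")      # recovers 80.0 ms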
Spin–spin relaxation
Physics,Chemistry
1,045
32,069,686
https://en.wikipedia.org/wiki/Elsevier%20Biobase
Elsevier BIOBASE is a bibliographic database covering all topics pertaining to biological research throughout the world. It was established in the 1950s in print format as Current Awareness in Biological Sciences. Temporal coverage is from 1994 to the present. The database had over 4.1 million records as of December 2008. More than 300,000 records are added annually and 84% contain an abstract. It is updated weekly. Coverage Coverage of the biological sciences is derived from 1,900 journals. Subjects are indexed by titles, authors, abstracts, bibliographic details and authors' addresses. Access points Access points on the internet are DataStar, DIALOG, DIMDI, and STN. Former titles This database continues: International Abstracts of Biological Sciences It also continues in part: Current Advances in Neuroscience Current Advances in Cell & Developmental Biology References External links Biological databases Microbiology literature Biotechnology databases Ecological data Environmental science databases Internet properties established in 1994 1954 establishments
Elsevier Biobase
Biology,Environmental_science
202
12,122,459
https://en.wikipedia.org/wiki/C2H3ClO
{{DISPLAYTITLE:C2H3ClO}} The molecular formula C2H3ClO (molar mass: 78.50 g/mol, exact mass: 77.9872 u) may refer to: Acetyl chloride Chloroacetaldehyde Chloroethylene oxide
C2H3ClO
Chemistry
67
1,786,925
https://en.wikipedia.org/wiki/Acid%20value
In chemistry, the acid value (AV, acid number, neutralization number or acidity) is a number used to quantify the acidity of a given chemical substance. It is the quantity of base (usually potassium hydroxide (KOH)), expressed as milligrams of KOH, required to neutralize the acidic constituents in 1 gram of a sample. The acid value measures the acidity of water-insoluble substances like oils, fats, waxes and resins, which do not have a pH value. The acid number is a measure of the number of carboxylic acid groups (−COOH) in a chemical compound, such as a fatty acid, or in a mixture of compounds. In other words, it is a measure of the free fatty acids (FFAs) present in a substance. In a typical procedure, a known amount of sample is dissolved in an organic solvent (often isopropanol) and titrated with a solution of alcoholic potassium hydroxide (KOH) of known concentration, using phenolphthalein as a colour indicator. The acid number for an oil sample is indicative of the age of the oil and can be used to determine when the oil must be changed. A liquid fat sample combined with neutralized 95% ethanol is titrated with standardized sodium hydroxide of 0.1 eq/L normality to a phenolphthalein endpoint. The volume and normality of the sodium hydroxide are used, along with the weight of the sample, to calculate the free fatty acid value. The acid value is usually measured as milligrams of KOH per gram of sample (mg KOH/g fat/oil), or grams of KOH per gram of sample (g KOH/g fat/oil). Calculations For example, for the analysis of crude oil, where KOH is the titrant and the crude oil is the titrand: AV = (Veq − beq) × N × 56.1 / Woil, where Veq is the volume of titrant (ml) consumed by the crude oil sample and 1 ml of spiking solution at the equivalence point, beq is the volume of titrant (ml) consumed by 1 ml of spiking solution at the equivalence point, 56.1 g/mol is the molecular weight of KOH, and Woil is the mass of the sample in grams. The normality (N) of the titrant is calculated as N = 1000 × WKHP / (204.23 × Veq,KHP), where WKHP is the mass (g) of potassium hydrogen phthalate (KHP) in 50 ml of KHP standard solution, Veq,KHP is the volume of titrant (ml) consumed by 50 ml of KHP standard solution at the equivalence point, and 204.23 g/mol is the molecular weight of KHP. Applications An increase in the amount of FFAs in a fat or oil sample indicates hydrolysis of triglycerides. Such a reaction occurs by the action of the lipase enzyme and is an indicator of inadequate processing and storage conditions. The source of the enzyme can be the tissue from which the oil or fat was extracted, or it can be a contaminant from other cells, including microorganisms. For determining the acid value of mineral oils and biodiesel, there are standard methods such as ASTM D974 and DIN 51558, and especially for biodiesel the European standard EN 14104 and ASTM D664 are both widely used worldwide. The acid value of biodiesel should be lower than 0.50 mg KOH/g in both the EN 14214 and ASTM D6751 standard fuels. This is because the FFAs produced can corrode automotive parts, so these limits protect vehicle engines and fuel tanks. A low acid value indicates good cleansing by soap. When oils and fats become rancid, triglycerides are converted into fatty acids and glycerol, causing an increase in acid value. A similar situation is observed during aging of biodiesel through analogous oxidation, when it is subjected to prolonged high temperatures (ester thermolysis), or through exposure to acids or bases (acid/base ester hydrolysis).
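The two formulas in the Calculations section chain together directly. A minimal sketch with made-up titration numbers (the volumes and masses are illustrative only; variable names mirror the definitions above):

def normality(m_khp_g, v_eq_ml):
    # N of the KOH titrant from a KHP standardization (KHP: 204.23 g/mol).
    return 1000.0 * m_khp_g / (204.23 * v_eq_ml)

def acid_value(v_eq_ml, b_eq_ml, n_titrant, sample_g):
    # AV in mg KOH per g of sample (KOH: 56.1 g/mol).
    return (v_eq_ml - b_eq_ml) * n_titrant * 56.1 / sample_g

n = normality(m_khp_g=0.40, v_eq_ml=19.6)   # ~0.1 N titrant
print(acid_value(v_eq_ml=2.5, b_eq_ml=0.3, n_titrant=n, sample_g=20.0))
# ~0.62 mg KOH/g for these made-up volumes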
Transesterification of waste cooking oil, which has a high acid value and high water content, can be performed using heteropolyacids such as dodecatungstophosphoric acid (PW12) as a catalyst. In 2007, Sahoo et al. made biodiesel consisting of mono-esters of polanga oil, an extract of the plant Calophyllum inophyllum, produced by triple-stage transesterification and blended with high-speed diesel, which was then tested for its use as a diesel substitute in a single-cylinder diesel engine. Testing Total acidity, fatty acid profiles, and free fatty acids (FFAs) can be determined for oils such as sunflower and soybean oils obtained by green processes involving supercritical carbon dioxide (scCO2) and pressurized liquid extraction (PLE). The identification and separation of the primary fatty acids responsible for acidity can ensure higher quality of fat and oil products. In 2020, the Dallas Group of America (DGA) and the American Oil Chemists' Society (AOCS) devised a standard method (5a-40) for testing free fatty acids in cooking oils. The DGA has produced a simplified test kit based on the 5a-40 test method. Acid values of various fats and oils See also References Analytical chemistry Lipids Food analysis Edible oil chemistry
Acid value
Chemistry
1,104
75,168,010
https://en.wikipedia.org/wiki/PSR%20J1747%E2%88%922958
PSR J1747-2958 is a young, weak, nonthermal radio pulsar with a rotation period of 98.8 milliseconds (ms). The pulsar moves at a supersonic speed through the interstellar medium, forming an unusual nonthermal nebula around it. This nebula around PSR J1747-2958 is also called the "Mouse nebula" or "G359.23-0.82", and it is an axisymmetric nebula. Discovery This object was discovered on February 7, 2008, with a 58 ks exposure. References Pulsars Astronomical objects discovered in 2008 Sagittarius (constellation)
PSR J1747−2958
Astronomy
135
1,719,250
https://en.wikipedia.org/wiki/Insulin%20glargine
Insulin glargine, sold under the brand name Lantus among others, is a long-acting modified form of medical insulin, used in the management of type 1 and type 2 diabetes. It is injected just under the skin. Effects generally begin an hour after use. Common side effects include low blood sugar, problems at the site of injection, itchiness, and weight gain. Other serious side effects include low blood potassium. NPH insulin rather than insulin glargine is generally preferred in pregnancy. After injection, microcrystals slowly release insulin for about 24 hours. This insulin causes body tissues to absorb glucose from the blood and decreases glucose production by the liver. Insulin glargine was patented, but the patent expired in most jurisdictions in 2014. It was approved for medical use in the United States in 2000. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 28th most commonly prescribed medication in the United States, with more than 18 million prescriptions. In July 2021, the US Food and Drug Administration (FDA) approved an interchangeable biosimilar insulin product called Semglee (insulin glargine-yfgn) for the treatment of diabetes. Medical uses The long-acting insulin class, which includes insulin glargine, does not appear much better than neutral protamine Hagedorn (NPH) insulin but does have a greater cost, making it, as of 2010, not cost-effective for the treatment of type 2 diabetes. In a previous review it was unclear whether there is a difference in hypoglycemia, as there were not enough data to determine any differences with respect to long-term outcomes; however, a more recent Cochrane systematic review did not find a clinically significant difference when comparing insulin glargine to NPH insulin, insulin detemir or insulin degludec in the management of type 1 diabetes in either adults or children over periods of 6 months or longer. It is not typically the recommended long-acting insulin in the United Kingdom. Semglee is indicated to improve glycemic control in adults and children with type 1 diabetes and in adults with type 2 diabetes. Semglee is both biosimilar to, and interchangeable with, its reference product Lantus (insulin glargine), a long-acting insulin analog. Mixing with other insulins The American Diabetes Association said in 2003 that, unlike some other longer-acting insulins, glargine should not be diluted or mixed with other insulin or solution in the same syringe, due to the low pH of its diluent. However, a 2004 study found that mixing glargine with other insulins did not affect the short-term glycemic profile. Adverse effects Common side effects include low blood sugar, problems at the site of injection, itchiness, and weight gain. Serious side effects include low blood potassium. As of 2012, tentative evidence shows no association between insulin glargine and cancer; previous studies had raised concerns. When comparing insulin glargine to NPH insulin, insulin detemir or insulin degludec, no significant differences in adverse effects were found in the management of type 1 diabetes in either adults or children over periods of six months or longer. Pharmacology Mechanism of action Insulin glargine differs from human insulin by replacing asparagine with glycine in position 21 of the A-chain and by carboxy-terminal extension of the B-chain by two arginine residues. The arginine amino acids shift the isoelectric point from a pH of 5.4 to 6.7, making the molecule more soluble at an acidic pH and less soluble at physiological pH.
The isoelectric shift also allows for the subcutaneous injection of a clear solution. The glycine substitution prevents deamidation of the acid-sensitive asparagine at acidic pH. In the neutral subcutaneous space, higher-order aggregates form, resulting in a slow, peakless dissolution and absorption of insulin from the site of injection. History In June 2000, the European Commission formally approved the launching of Lantus by Sanofi-Aventis Germany in the European Union. The marketing authorisation was renewed on 9 June 2005. A three-fold more concentrated formulation, under the brand name Toujeo, was introduced after FDA approval in 2015. Legal status Biosimilars Abasaglar was approved for medical use in the European Union in September 2014. Lusduna was approved for medical use in the European Union in January 2017. In March 2018, insulin glargine (Semglee) was approved for medical use in the European Union. In July 2021, insulin glargine-yfgn (Semglee) was approved for medical use in the United States as the first interchangeable biosimilar of Lantus. The FDA granted approval of Semglee to Mylan Pharmaceuticals. Patent expiry Patent protection for insulin glargine expired in Europe and the US in 2014. Insulin glargine from competitor Eli Lilly became available in most countries during 2015, under the brand names Basaglar (as a follow-on in the US) and Abasaglar (as a biosimilar in the EU). Brand names Insulin glargine is available under brand names including Basaglar, Lantus, and Toujeo. References Insulin receptor agonists Human proteins Recombinant proteins Peptide hormones Peptide therapeutics Wikipedia medicine articles ready to translate Drugs developed by Eli Lilly and Company Drugs developed by Boehringer Ingelheim Sanofi World Health Organization essential medicines
Insulin glargine
Biology
1,163
1,983,101
https://en.wikipedia.org/wiki/Partition%20%28database%29
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning refers to intentionally breaking a large database into smaller ones for scalability purposes, distinct from network partitions which are a type of network fault between nodes. In a partitioned database, each piece of data belongs to exactly one partition, effectively making each partition a small database of its own. Database partitioning is normally done for manageability, performance or availability reasons, or for load balancing. It is popular in distributed database management systems, where each partition may be spread over multiple nodes, with users at the node performing local transactions on the partition. This increases performance for sites that have regular transactions involving certain views of data, whilst maintaining availability and security. Partitioning enables distribution of datasets across multiple disks and query loads across multiple processors. For queries that operate on a single partition, each node executes queries independently on its local partition, enabling linear scaling of query throughput with additional nodes. More complex queries can be parallelized across multiple nodes, though this presents additional challenges. History Database partitioning emerged in the 1980s with systems like Teradata and NonStop SQL. The approach was later adopted by NoSQL databases and Hadoop-based data warehouses. While implementations vary between transactional and analytical workloads, the core principles of partitioning remain consistent across both use cases. Terminology Different databases use varying terminology for partitioning: Shard in MongoDB, Elasticsearch, and SolrCloud Region in HBase Tablet in Bigtable vnode in Cassandra and Riak vBucket in Couchbase Partitioning and Replication Partitioning is commonly implemented alongside replication, storing partition copies across multiple nodes. Each record belongs to one partition but may exist on multiple nodes for fault tolerance. In leader-follower replication systems, nodes can simultaneously serve as leaders for some partitions and followers for others. Load Balancing and Hot Spots Partitioning aims to distribute data and query load evenly across nodes. With ideal distribution, system capacity scales linearly with added nodes—ten nodes should process ten times the data and throughput of a single node. Uneven distribution, termed skew, reduces partitioning efficiency. Partitions with excessive load are called hot spots. Several strategies address hot spots: Random record assignment to nodes, at the cost of retrieval complexity Key-range partitioning with optimized boundaries Hash-based partitioning for even load distribution Partitioning criteria Current high-end relational database management systems provide for different criteria to split the database. They take a partitioning key and assign a partition based on certain criteria. Some common criteria include: Range partitioning: assigns continuous key ranges to partitions, analogous to encyclopedia volumes. Known range boundaries enable direct request routing. Boundaries can be set manually or automatically for balanced distribution. While this enables efficient range scans, certain access patterns create hot spots. For instance, in sensor networks using timestamp keys, writes concentrate in the current time period's partition. Using compound keys—such as prefixing timestamps with sensor identifiers—can distribute this load. 
An example could be a partition for all rows where the "zipcode" column has a value between 70000 and 79999. List partitioning: a partition is assigned a list of values. If the partitioning key has one of these values, the partition is chosen. For example, all rows where the column Country is either Iceland, Norway, Sweden, Finland or Denmark could build a partition for the Nordic countries. Composite partitioning: allows for certain combinations of the above partitioning schemes, by for example first applying a range partitioning and then a hash partitioning. Consistent hashing could be considered a composite of hash and list partitioning where the hash reduces the key space to a size that can be listed. Round-robin partitioning: the simplest strategy, it ensures uniform data distribution. With n partitions, the ith tuple in insertion order is assigned to partition (i mod n). This strategy enables the sequential access to a relation to be done in parallel. However, the direct access to individual tuples, based on a predicate, requires accessing the entire relation. Hash partitioning: applies a hash function to convert skewed data into uniform distributions for even load distribution across partitions. While this effectively prevents hot spots, it sacrifices range query efficiency as adjacent keys scatter across partitions. Common implementations include MD5 in Cassandra and MongoDB. Some systems, like Cassandra, combine approaches using compound primary keys: hashing the first component for partitioning while maintaining sort order for remaining components within partitions. In any partitioning scheme, data is typically arranged so that each piece of data (record, row, or document) belongs to exactly one partition. While some databases support operations that span multiple partitions, this single-partition association is fundamental to the partitioning concept. Partitioning methods The partitioning can be done by either building separate smaller databases (each with its own tables, indices, and transaction logs), or by splitting selected elements, for example just one table. Horizontal partitioning Horizontal partitioning involves putting different rows into different tables. For example, customers with ZIP codes less than 50000 are stored in CustomersEast, while customers with ZIP codes greater than or equal to 50000 are stored in CustomersWest. The two partition tables are then CustomersEast and CustomersWest, while a view with a union might be created over both of them to provide a complete view of all customers. Vertical partitioning Vertical partitioning involves creating tables with fewer columns and using additional tables to store the remaining columns. Generally, this practice is known as normalization. However, vertical partitioning extends further, and partitions columns even when already normalized. This type of partitioning is also called "row splitting", since rows get split by their columns, and might be performed explicitly or implicitly. Distinct physical machines might be used to realize vertical partitioning: storing infrequently used or very wide columns, taking up a significant amount of memory, on a different machine, for example, is a method of vertical partitioning. A common form of vertical partitioning is to split static data from dynamic data, since the former is faster to access than the latter, particularly for a table where the dynamic data is not used as often as the static. 
Creating a view across the two newly created tables restores the original table with a performance penalty, but accessing the static data alone will show higher performance. A columnar database can be regarded as a database that has been vertically partitioned until each column is stored in its own table. See also Block Range Index CAP theorem Data striping in RAIDs References Database management systems
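As a concrete illustration of the partitioning criteria described above, here is a minimal Python sketch of hash, range, and round-robin partition assignment; the partition count, key names, and range boundaries are invented for the example.

import hashlib
from bisect import bisect_right

N_PARTITIONS = 4

def hash_partition(key: str) -> int:
    # Hash partitioning: a stable hash (MD5 here, as in some systems
    # noted above) spreads skewed keys evenly across partitions,
    # at the cost of scattering adjacent keys.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % N_PARTITIONS

# Range partitioning: sorted upper boundaries define the partitions,
# so range scans stay local, but popular key ranges can become hot spots.
RANGE_BOUNDARIES = [25000, 50000, 75000]  # hypothetical zip-code splits

def range_partition(zipcode: int) -> int:
    return bisect_right(RANGE_BOUNDARIES, zipcode)

def round_robin_partition(i: int) -> int:
    # Round-robin: the i-th tuple in insertion order goes to i mod n,
    # giving uniform distribution but no key-based lookup.
    return i % N_PARTITIONS

print(hash_partition("sensor-42"),   # some partition in [0, 3]
      range_partition(70123),        # partition 2 (50000 <= 70123 < 75000)
      round_robin_partition(9))      # partition 1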
Partition (database)
Engineering
1,366
113,604
https://en.wikipedia.org/wiki/Broadcasting
Broadcasting is the distribution of audio or video content to a dispersed audience via any electronic mass communications medium, but typically one using the electromagnetic spectrum (radio waves), in a one-to-many model. Broadcasting began with AM radio, which came into popular use around 1920 with the spread of vacuum tube radio transmitters and receivers. Before this, most implementations of electronic communication (early radio, telephone, and telegraph) were one-to-one, with the message intended for a single recipient. The term broadcasting evolved from its use as the agricultural method of sowing seeds in a field by casting them broadly about. It was later adopted for describing the widespread distribution of information by printed materials or by telegraph. Examples applying it to "one-to-many" radio transmissions of an individual station to multiple listeners appeared as early as 1898. Over-the-air broadcasting is usually associated with radio and television, though more recently, both radio and television transmissions have begun to be distributed by cable (cable television). The receiving parties may include the general public or a relatively small subset; the point is that anyone with the appropriate receiving technology and equipment (e.g., a radio or television set) can receive the signal. The field of broadcasting includes both government-managed services such as public radio, community radio and public television, and private commercial radio and commercial television. The U.S. Code of Federal Regulations, title 47, part 97 defines broadcasting as "transmissions intended for reception by the general public, either direct or relayed". Private or two-way telecommunications transmissions do not qualify under this definition. For example, amateur ("ham") and citizens band (CB) radio operators are not allowed to broadcast. As defined, transmitting and broadcasting are not the same. Transmission of radio and television programs from a radio or television station to home receivers by radio waves is referred to as over the air (OTA) or terrestrial broadcasting and in most countries requires a broadcasting license. Transmissions using a wire or cable, like cable television (which also retransmits OTA stations with their consent), are also considered broadcasts but do not necessarily require a license (though in some countries, a license is required). In the 2000s, transmissions of television and radio programs via streaming digital technology have increasingly been referred to as broadcasting as well. History In 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the then-newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio. Audio radio broadcasting began experimentally in the first decade of the 20th century. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers. World War I accelerated the development of radio for military communications. After the war, commercial AM radio broadcasting began in the 1920s and became an important mass medium for entertainment and news.
World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar. Development of stereo FM broadcasting of radio began in the 1930s in the United States and the 1970s in the United Kingdom, displacing AM as the dominant commercial standard. On 25 March 1925, John Logie Baird demonstrated the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning on 30 September 1929. However, for most of the 20th century, televisions depended on the cathode-ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II, interrupted experiments resumed and television became an important home entertainment broadcast medium, using VHF and UHF spectrum. Satellite broadcasting was initiated in the 1960s and moved into general industry usage in the 1970s, with DBS (Direct Broadcast Satellites) emerging in the 1980s. Originally, all broadcasting was composed of analog signals using analog transmission techniques but in the 2000s, broadcasters switched to digital signals using digital transmission. An analog signal is any continuous signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves. In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values which imposes some bandwidth and dynamic range constraints on the representation. In general usage, broadcasting most frequently refers to the transmission of information and entertainment programming from various sources to the general public: Analog audio radio (AM, FM) vs. digital audio radio (HD radio), digital audio broadcasting (DAB), satellite radio and digital Radio Mondiale (DRM) Analog television vs. digital television Wireless The world's technological capacity to receive information through one-way broadcast networks more than quadrupled during the two decades from 1986 to 2007, from 432 exabytes of (optimally compressed) information, to 1.9 zettabytes. This is the information equivalent of 55 newspapers per person per day in 1986, and 175 newspapers per person per day by 2007. Methods In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analog (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values). Historically, there have been several methods used for broadcasting electronic media audio and video to the general public: Telephone broadcasting (1881–1932): the earliest form of electronic broadcasting (not counting data services offered by stock telegraph companies from 1867, if ticker-tapes are excluded from the definition). 
Telephone broadcasting began with the advent of Théâtrophone ("Theatre Phone") systems, which were telephone-based distribution systems allowing subscribers to listen to live opera and theatre performances over telephone lines, created by French inventor Clément Ader in 1881. Telephone broadcasting also grew to include telephone newspaper services for news and entertainment programming, which were introduced in the 1890s, primarily located in large European cities. These telephone-based subscription services were the first examples of electrical/electronic broadcasting and offered a wide variety of programming. Radio broadcasting (experimentally from 1906, commercially from 1920): audio signals sent through the air as radio waves from a transmitter, picked up by an antenna and sent to a receiver. Radio stations can be linked in radio networks to broadcast common radio programs, either in broadcast syndication, simulcast or subchannels. Television broadcasting (telecast), experimentally from 1925, commercially from the 1930s: an extension of radio to include video signals. Cable radio (also called cable FM, from 1928) and cable television (from 1932): both via coaxial cable, originally serving principally as transmission media for programming produced at either radio or television stations, but later expanding into a broad universe of cable-originated channels. Direct-broadcast satellite (DBS) (from ) and satellite radio (from ): meant for direct-to-home broadcast programming (as opposed to studio network uplinks and down-links), provides a mix of traditional radio or television broadcast programming, or both, with dedicated satellite radio programming. (See also: Satellite television) Webcasting of video/television (from ) and audio/radio (from ) streams: offers a mix of traditional radio and television station broadcast programming with dedicated Internet radio and Internet television. Economic models There are several means of providing financial support for continuous broadcasting: Commercial broadcasting: for-profit, usually privately owned stations, channels, networks, or services providing programming to the public, supported by the sale of air time to advertisers for radio or television advertisements during or in breaks between programs, often in combination with cable or pay cable subscription fees. Public broadcasting: usually non-profit, publicly owned stations or networks supported by license fees, government funds, grants from foundations, corporate underwriting, audience memberships, contributions, or a combination of these. Community broadcasting: a form of mass media in which a television station, or a radio station, is owned, operated or programmed by a community group to provide programs of local interest, known as local programming. Community stations are most commonly operated by non-profit groups or cooperatives; however, in some cases they may be operated by a local college or university, a cable company or a municipal government. Internet webcast: the audience pays to recharge their accounts and buy virtual gifts for the anchor, and the platform converts the gifts into virtual currency. The anchor withdraws the virtual currency, with the platform taking a share. If the anchor belongs to a guild, revenue is settled between the guild and the live-broadcasting platform, and the anchor receives a salary and part of the bonus. This is the most common profit model of live-broadcast products. Broadcasters may rely on a combination of these business models.
For example, in the United States, National Public Radio (NPR) and the Public Broadcasting Service (PBS, television) supplement public membership subscriptions and grants with funding from the Corporation for Public Broadcasting (CPB), which is allocated bi-annually by Congress. US public broadcasting corporate and charitable grants are generally given in consideration of underwriting spots, which differ from commercial advertisements in that they are governed by specific FCC restrictions prohibiting the advocacy of a product or a "call to action". Recorded and live forms The first regular television broadcasts started in 1937. Broadcasts can be classified as recorded or live. The former allows for the correction of errors and the removal of superfluous or undesired material, as well as rearranging material, applying slow motion and repetitions, and other techniques to enhance the program. However, some live events like sports telecasts can include some of these aspects, such as slow-motion clips of important goals or hits, within the live television broadcast. American radio-network broadcasters habitually forbade prerecorded broadcasts in the 1930s and 1940s, requiring radio programs played for the Eastern and Central time zones to be repeated three hours later for the Pacific time zone (See: Effects of time on North American broadcasting). This restriction was dropped for special occasions, as in the case of the German dirigible airship Hindenburg disaster at Lakehurst, New Jersey, in 1937. During World War II, prerecorded broadcasts from war correspondents were allowed on U.S. radio. In addition, American radio programs were recorded for playback by Armed Forces Radio stations around the world. A disadvantage of recording first is that the public may learn the outcome of an event before the recording is broadcast, which may be a spoiler. Prerecording may be used to prevent announcers from deviating from an officially approved script during a live radio broadcast, as occurred with propaganda broadcasts from Germany in the 1940s and with Radio Moscow in the 1980s. Many events are advertised as being live, although they are often recorded live (sometimes called "live-to-tape"). This is particularly true of performances of musical artists on radio when they visit for an in-studio concert performance. Similar situations have occurred in television production ("The Cosby Show is recorded in front of a live television studio audience") and news broadcasting. A broadcast may be distributed through several physical means. If coming directly from the radio studio at a single station or television station, it is sent through the studio/transmitter link to the transmitter and hence from the television antenna located on the radio masts and towers out to the world. Programming may also come through a communications satellite, played either live or recorded for later transmission. Networks of stations may simulcast the same programming at the same time, originally via microwave link, now usually by satellite. Distribution to stations or networks may also be through physical media, such as magnetic tape, compact disc (CD), DVD, and sometimes other formats. Usually these are included in another broadcast, such as when electronic news gathering (ENG) returns a story to the station for inclusion on a news programme. The final leg of broadcast distribution is how the signal gets to the listener or viewer.
It may come over the air as with a radio station or television station to an antenna and radio receiver, or may come through cable television or cable radio (or wireless cable) via the station or directly from a network. The Internet may also bring either internet radio or streaming media television to the recipient, especially with multicasting allowing the signal and bandwidth to be shared. The term broadcast network is often used to distinguish networks that broadcast over-the-air television signals that can be received using a tuner inside a television set with a television antenna from so-called networks that are broadcast only via cable television (cablecast) or satellite television that uses a dish antenna. The term broadcast television can refer to the television programs of such networks. Social impact The sequencing of content in a broadcast is called a schedule. As with all technological endeavors, a number of technical terms and slang have developed. A list of these terms can be found at List of broadcasting terms. Television and radio programs are distributed through radio broadcasting or cable, often both simultaneously. By coding signals and having a cable converter box with decoding equipment in homes, the latter also enables subscription-based channels, pay-tv and pay-per-view services. In his essay, John Durham Peters wrote that communication is a tool used for dissemination. Peters stated, "Dissemination is a lens—sometimes a usefully distorting one—that helps us tackle basic issues such as interaction, presence, and space and time ... on the agenda of any future communication theory in general". Dissemination focuses on the message being relayed from one main source to one large audience without the exchange of dialogue in between. It is possible for the message to be changed or corrupted by government officials once the main source releases it. There is no way to predetermine how the larger population or audience will absorb the message. They can choose to listen, analyze, or ignore it. Dissemination in communication is widely used in the world of broadcasting. Broadcasting focuses on getting a message out, and it is up to the general public to do what they wish with it. Peters also states that broadcasting is used to address an open-ended destination. There are many forms of broadcasting, but they all aim to distribute a signal that will reach the target audience. Broadcasters typically arrange audiences into entire assemblies. In terms of media broadcasting, a radio show can gather a large number of followers who tune in every day to listen to that specific disc jockey. The disc jockey follows the script for their radio show and just talks into the microphone. They do not expect immediate feedback from any listeners. The message is broadcast across airwaves throughout the community, but the listeners cannot always respond immediately, especially since many radio shows are recorded prior to the actual air time. Conversely, receivers can opt in or opt out of receiving broadcast messages, offering them some control over the information they receive. Broadcast engineering Broadcast engineering is the field of electrical engineering, and now to some extent computer engineering and information technology, which deals with radio and television broadcasting. Audio engineering and RF engineering are also essential parts of broadcast engineering, being their own subsets of electrical engineering.
Broadcast engineering involves both the studio and transmitter aspects (the entire airchain), as well as remote broadcasts. Every station has a broadcast engineer, though one may now serve an entire station group in a city. In small media markets the engineer may work on a contract basis for one or more stations as needed. See also Notes and references Bibliography Carey, James (1989), Communication as Culture, New York and London: Routledge, pp. 201–30 Kahn, Frank J., ed. Documents of American Broadcasting, fourth edition (Prentice-Hall, Inc., 1984). Lichty Lawrence W., and Topping Malachi C., eds, American Broadcasting: A Source Book on the History of Radio and Television (Hastings House, 1975). Meyrowitz, Joshua, Mediating Communication: What Happens? in Downing, J., Mohammadi, A., and Sreberny-Mohammadi, A. (eds), Questioning The Media (Thousand Oaks, CA: Sage 1995), pp. 39–53 Thompson, J., The Media and Modernity, in Mackay, H., and O'Sullivan, T. (eds), The Media Reader: Continuity and Transformation (London: Sage, 1999), pp. 12–27 International bibliography – History of wireless and radio broadcasting Further reading Barnouw Erik. The Golden Web (Oxford University Press, 1968); The Sponsor (1978); A Tower in Babel (1966). Covert Cathy, and Stevens John L. Mass Media Between the Wars (Syracuse University Press, 1984). Tim Crook; International Radio Journalism: History, Theory and Practice Routledge, 1998 John Dunning; On the Air: The Encyclopedia of Old-Time Radio Oxford University Press, 1998 Ewbank Henry and Lawton Sherman P. Broadcasting: Radio and Television (Harper & Brothers, 1952). Maclaurin W. Rupert. Invention and Innovation in the Radio Industry (The Macmillan Company, 1949). Robert W. McChesney; Telecommunications, Mass Media, and Democracy: The Battle for the Control of U.S. Broadcasting, 1928–1935 Oxford University Press, 1994 Gwenyth L. Jackaway; Media at War: Radio's Challenge to the Newspapers, 1924–1939 Praeger Publishers, 1995 Lazarsfeld Paul F. The People Look at Radio (University of North Carolina Press, 1946). Schramm Wilbur, ed. Mass Communications (University of Illinois Press, 1960). Schwoch James. The American Radio Industry and Its Latin American Activities, 1900–1939 (University of Illinois Press, 1990). Slater Robert. This ... is CBS: A Chronicle of 60 Years (Prentice Hall, 1988). Sterling Christopher H. Electronic Media, A Guide to Trends in Broadcasting and Newer Technologies 1920–1983 (Praeger, 1984). Sterling Christopher, and Kittross John M. Stay Tuned: A Concise History of American Broadcasting (Wadsworth, 1978). Wells, Alan, World Broadcasting: A Comparative View, Greenwood Publishing Group, 1996. External links Radio Locator, for American radio station with format, power, and coverage information. Jim Hawkins' Radio and Broadcast Technology Page – History of broadcast transmitter Indie Digital Cinema Services – Broadcast Industry Glossary Telecommunications
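To make the modulation and demodulation process described in the Methods section above concrete, here is a minimal Python/NumPy sketch of amplitude modulation with a crude envelope detector; all frequencies, rates, and the modulation index are arbitrary illustrative choices.

import numpy as np

fs = 100_000                                 # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of signal
message = np.sin(2 * np.pi * 1_000 * t)      # 1 kHz audio tone
carrier = np.cos(2 * np.pi * 20_000 * t)     # 20 kHz carrier wave

m = 0.5                                      # modulation index
am_signal = (1 + m * message) * carrier      # amplitude-modulated wave

# Crude envelope detector: rectify, then low-pass filter by averaging
# over one carrier period (100_000 / 20_000 = 5 samples).
rectified = np.abs(am_signal)
envelope = np.convolve(rectified, np.ones(5) / 5, mode="same")
recovered = envelope - envelope.mean()       # strip the DC component

# The recovered waveform tracks the original message closely.
print(np.corrcoef(recovered, message)[0, 1])  # correlation near 1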
Broadcasting
Technology
3,899
61,232,022
https://en.wikipedia.org/wiki/C17H16ClN3O2
{{DISPLAYTITLE:C17H16ClN3O2}} The molecular formula C17H16ClN3O2 (molar mass: 329.781 g/mol) may refer to: Carburazepam 7-Hydroxyamoxapine Molecular formulas
C17H16ClN3O2
Physics,Chemistry
63
62,953,266
https://en.wikipedia.org/wiki/Chinese%20Society%20of%20Astronautics
The Chinese Society of Astronautics (; abbreviated CSA) is a professional association of individuals with an interest in space. As of 2019, the society has 38 specialized committees and 179 working committees with more than 30,000 individual members. History The initial concept of the Chinese Society of Astronautics was proposed in 1977 and accepted by the China Association for Science and Technology (CAST). The Chinese Society of Astronautics was founded by Qian Xuesen, Ren Xinmin and Zhang Zhenhuan on October 23, 1979. In September 1980 it became a member of the International Astronautical Federation (IAF). Scientific publishing Journal of Astronautics Advances in Aerospace Science and Technology Space Exploration List of presidents References External links Space advocacy organizations Scientific organizations established in 1979 Organizations based in Beijing 1979 establishments in China 1979 in Beijing
Chinese Society of Astronautics
Astronomy
165
20,856,932
https://en.wikipedia.org/wiki/Contingent%20value%20rights
In corporate finance, contingent value rights (CVR) are rights granted by an acquirer to a company's shareholders, facilitating the transaction where some uncertainty is inherent. CVRs may be separately tradeable securities; they are occasionally acquired (or shorted) by specialized hedge funds. Forms These rights typically take either of two forms: (1) Event-driven CVRs compensate the owners for positive developments in their business that have yet to eventuate - hence protecting the acquirer against the valuation risk inherent in overpaying. (2) Price-protection CVRs are granted when payment is share based - protecting the acquired company by providing a hedge against downside price risk in the acquirer's equity. In the first case, CVRs are granted in scenarios in which the acquiring company does not wish to pay for a product that might not work, has a limited market, or might need significant investment; whereas on the other side, the acquired company "wants to get full value for its assets". The CVR then "helps bridge this negotiation". Under these rights, shareholders will receive additional cash, securities, or benefits if a specific and named event occurs - one where the value of the firm significantly increases - within a specified timeframe. CVRs are common in the biotech and pharmaceutical industries; they are also often granted to shareholders in companies facing significant, value-accretive restructuring. For an example see Media General / Nexstar Media Group. In the second case, protection against price risk is facilitated by specifying that payment will be made at an averaged, as opposed to final, share price; a floor may also be set. Valuation Under both forms, the CVR is, in function, a form of option. In the first case, analogous to a call option, the payout to the CVR holder will be triggered by the event occurring, and will be zero otherwise. To determine the value of these rights, analysts will apply a modified option pricing model based on the probability of the event, the time horizon specified, and the corresponding payout rules; see contingent claim valuation and real options valuation. In the second case, the CVR takes the form of a modified Asian option. See also Strip financing Hold-up problem Earnout References External links CVR on Investopedia CVR on Motley Fool Shadowy Shares: The Dark Side of Contingent Value Rights, Forbes.com (Michael Stocker, Iona Evan. 2011) Corporate finance Mergers and acquisitions Valuation (finance) Securities (finance) Real options Equity securities Venture capital
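To make the option analogy concrete, here is a minimal Python sketch that values an event-driven CVR as a probability-weighted, discounted payout, in the spirit of the modified option-pricing approach described above; the event probability, payout, discount rate, and horizon are all hypothetical inputs, and a real model would also handle payout schedules and risk adjustments.

from math import exp

def event_cvr_value(p_event: float, payout: float,
                    rate: float, horizon_years: float) -> float:
    # Binary (cash-or-nothing) payoff: the holder receives `payout`
    # if the named event occurs within the horizon, zero otherwise,
    # discounted back at a risk-adjusted rate.
    return p_event * payout * exp(-rate * horizon_years)

# Hypothetical terms: 30% chance of a $2.00-per-share milestone
# payment within 3 years, discounted at 8% per year.
print(round(event_cvr_value(0.30, 2.00, 0.08, 3.0), 3))  # ~0.472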
Contingent value rights
Engineering
515
18,599,326
https://en.wikipedia.org/wiki/Xylamidine
Xylamidine is a drug which acts as an antagonist of the serotonin 5-HT2A and 5-HT2C receptors, and to a lesser extent of the serotonin 5-HT1A receptor. The drug does not cross the blood–brain barrier and hence is peripherally selective, which makes it useful for blocking peripheral serotonergic responses like cardiovascular and gastrointestinal effects, without producing the central effects of 5-HT2A receptor blockade such as sedation, or interfering with the central actions of 5-HT2A receptor agonists. Xylamidine and analogues were patented in 2023 for use in combination with serotonin 5-HT2A receptor agonists like serotonergic psychedelics. Chemistry Synthesis Xylamidine is an amidine. It is prepared by alkylation of 3-methoxyphenol (m-methoxyphenol) with α-chloropropionitrile, potassium iodide, and potassium carbonate in butanone to give the corresponding aryloxy nitrile, which is in turn reduced with lithium aluminium hydride to give the corresponding primary amine. Treatment of this amine with m-tolylacetonitrile in the presence of anhydrous hydrochloric acid completes the synthesis. Alternatively, the primary amine can be reacted with m-tolylacetamidine under acid catalysis to produce xylamidine. See also BW-501C67 AL-34662 VU0530244 References 3-Methoxyphenyl compounds 5-HT1A antagonists 5-HT2A antagonists 5-HT2C antagonists Amidines Peripherally selective drugs Phenethylamines
Xylamidine
Chemistry
377
4,958,191
https://en.wikipedia.org/wiki/Rose%20bengal
Rose bengal (4,5,6,7-tetrachloro-2',4',5',7'-tetraiodofluorescein) is a stain. Rose bengal belongs to the class of organic compounds called xanthenes. Its sodium salt is commonly used in eye drops to stain damaged conjunctival and corneal cells and thereby identify damage to the eye. The stain is also used in the preparation of Foraminifera for microscopic analysis, allowing the distinction between forms that were alive or dead at the time of collection. A form of rose bengal is also being studied as a treatment for certain cancers and skin conditions. The cancer formulation of the drug, known as PV-10, is currently undergoing clinical trials for melanoma, breast cancer, and neuroendocrine tumors. The company has also formulated a drug based on rose bengal for the treatment of eczema and psoriasis; this drug, PH-10, is currently in clinical trials as well. History and etymology Rose bengal was originally prepared in 1882 by Swiss chemist Robert Gnehm, as an analogue of fluorescein. Rudolf Nietzki at the University of Basel identified the principal constituents of rose bengal as iodine derivatives of di- and tetra-chlorofluorescein. The compound was originally used as a wool dye. Its name derives from rose (flower) and Bengal (region); it is printed as rose bengal or Rose Bengal in the scientific literature. Chemical applications Despite its complicated photochemistry involving several species, rose bengal is also used in synthetic chemistry as a visible-light photoredox catalyst and to generate singlet oxygen from triplet oxygen. The singlet oxygen can then undergo a variety of useful reactions, particularly [2 + 2] cycloadditions with alkenes and similar systems. Derivatives and salts Rose bengal can be used to form many derivatives that have important medical functions. One such derivative was created to be sonosensitive but photoinsensitive, so that, with high-intensity focused ultrasound, it could be used in the treatment of cancer. The derivative was formed by amidation of rose bengal, which turned off the fluorescent and photosensitive properties of rose bengal, leading to a usable compound, named in the study as RB2. Salts of rose bengal include C20H2Cl4I4Na2O5 (CAS 632-69-9). This sodium salt is a dye, which has its own unique properties and uses. Biological applications PV-10 (an injectable form of rose bengal) was found to cause an observable response in 60% of tumors treated, according to researchers in a phase II melanoma study. Locoregional disease control was observed in 75% of patients. Also confirmed was a "bystander effect", previously observed in the phase I trial, whereby untreated lesions responded to treatment as well, potentially due to immune system response. These data were based on the interim results (in 2009) of the first 40 patients treated in an 80-patient study. A phase-3 study of PV-10 as a single-agent therapy for patients with locally advanced cutaneous melanoma (Clinical Trials ID NCT02288897) is enrolling patients. Rose bengal has been shown not just to prevent the growth and spread of ovarian cancer, but also to cause apoptotic cell death of the cancer cells. This has been demonstrated in vitro, suggesting that rose bengal remains a possible option in the treatment of cancer and that further research should be done. Rose bengal has been used to treat colon cancer. In one such study, a protective immune response was generated from immunogenic cell death.
Rose bengal is also used in animal models of ischemic stroke (photothrombotic stroke models) in biomedical research. A bolus of the compound is injected into the venous system. Then the region of interest (e.g., the cerebral cortex) is exposed and illuminated by laser light at 561 nm. A thrombus forms in the illuminated blood vessels, causing a stroke in the dependent brain tissue. Rose bengal has been used for 50 years to diagnose liver and eye cancer. In the diagnosis of brucellosis, rose bengal dye is mixed with a homogenate of Brucella, with the pH of the solution maintained at 3.8; the preparation diagnoses the disease by agglutination with the suspected serum. Rose bengal is slightly irritating and toxic to the eye. It has also been used as an insecticide. Rose bengal stains cells only where the surface epithelium is not being properly protected by the preocular tear film, since an intact tear film prevents the dye from staining the cells. This is why rose bengal is often useful as a stain in diagnosing certain medical issues, such as conjunctival and lid disorders. Rose bengal has been used for ocular surface staining to study the efficacy of punctal plugs in the treatment of keratoconjunctivitis sicca. Rose bengal is being researched as an agent in creating nano sutures. Wounds are painted on both sides with it and then illuminated with an intense light. This links the tiny collagen fibers together, sealing the wound. Healing is faster, and the seal reduces chances of infection. Rose bengal is used to suppress bacterial growth in several microbiological media, including Cooke's rose bengal agar. Rose bengal has been used as a protoplasm stain to discriminate between living and dead micro-organisms, particularly Foraminifera, since the 1950s, when Bill Walton developed the technique. Rose bengal acetate can act as a photosensitiser and may have potential in photodynamic therapy to treat some cancers. References External links Absorption and extinction data Ophthalmology Triarylmethane dyes Hydroxyarenes Chlorobenzene derivatives Iodobenzene derivatives Spiro compounds VMAT inhibitors Acid dyes
Rose bengal
Chemistry
1,257
2,954,287
https://en.wikipedia.org/wiki/Objectory
Objectory is an object-oriented methodology mostly created by Ivar Jacobson, who has greatly contributed to object-oriented software engineering. The framework of Objectory is a design technique called design with building blocks. With the building-block technique, a system is viewed as a set of connected blocks, with each block representing a system service. Objectory is considered to be the first commercially available object-oriented methodology for developing large-scale industrial systems. This approach gives a global view of the software development process and focuses on cost efficiency. Its main techniques are conceptual modelling, object-oriented programming, and a block design technique. References Object-oriented programming
Objectory
Engineering
129
42,179,990
https://en.wikipedia.org/wiki/Per%20Vilhelm%20Br%C3%BCel
Per Vilhelm Brüel (6 March 1915 – 2 April 2015) was a Danish physicist and engineer who pioneered and made fundamental contributions to the development of the physics of sound and vibration. He also founded the world's largest manufacturer and supplier of sound and vibration measurement equipment, systems and solutions, Brüel & Kjær. Brüel was a close friend of Niels Bohr, and despite the danger Brüel traveled from Sweden to Denmark during the German occupation with important documents of Bohr's work. Brüel was fluent in Danish, German, English, and Swedish, and spoke French and Italian. Brüel was a descendant of the Brüel branch of the German noble family, the von Brühl family. Early years Brüel was born in Copenhagen as the eldest son of his family. Brüel's father was a forester, a tradition that he intended his son to continue. However, Brüel did not like the idea of becoming a forester, causing a family scandal. The family lived in the south of Jutland, away from schools and towns. When Brüel got older, he was sent away for a "blacksmith" education, which then denoted a practical education in engineering. But he decided to attend university and relocated to the technical university in Copenhagen. There Brüel pursued aerodynamics, electronics, and acoustics. Early career At the Technical University of Denmark, Brüel started working on his Ph.D., which today would be equivalent to a Master of Science, in 1932, and finished it in about five months. Brüel's mentor was P.O. Pedersen, a famous Danish engineer and physicist, and Brüel was handpicked by Pedersen to work with him. Brüel later described Pedersen as brilliant, but said that he took credit for Brüel's and his other students' work. In January 1939, Brüel was drafted into the Danish army to do radio work for the military for a year, and it was there he built his first instrument, a battery-operated, constant-percentage-bandwidth analyzer. At the end of 1942, due to the German occupation of Denmark, Brüel went to work in Sweden. There he went on to do important work in both Sweden and Finland, including constructing an acoustics lab at Chalmers University. Brüel & Kjær and the Second World War In 1942, Brüel started the company Brüel & Kjær with his old friend Viggo Kjær. The company experienced immediate growth, and they quickly expanded their sales throughout Scandinavia. However, due to WW2, they soon had a shortage of copper. Luckily, Brüel had a good friend who was trying to cause problems for the Germans and who informed Brüel about the Germans' thick communication cable from Copenhagen to Berlin that lay in the sea; together they pulled it up, destroying Hitler's communication with his troops for some time. In 1945, Holger Nielsen became a partner in the firm, for which he worked until his death in 1978. Despite the ongoing world war, Brüel managed to navigate his company and career to further success. In 1948, Brüel & Kjær bought their first property, a wooden army barracks in Nærum, 15 km north of Copenhagen, the premises where the company's headquarters is still located today. Brüel was responsible for product planning and research. As long as the market was limited to Denmark and Sweden, Brüel initially delivered instruments to customers on his Nimbus motorcycle. He kept this original antique in the basement of his home. When the rest of Scandinavia and some parts of Europe joined the market, cars were needed. By the 1980s, there were about 30 company cars.
In 1956, with customers in the rest of Europe and in some other parts of the world, "Brüel & Kjær Airlines" was established. The company eventually maintained a substantial fleet of aircraft, including two Piper Aztecs and two Beechcraft King Airs. This made it possible for Brüel and other company pilots to make airborne deliveries directly to any customer located near a general aviation airport. The fleet also provided high-speed transportation, independent of the scheduled airlines, to employees and customers. When Brüel was flying as PIC (pilot in command), he would often appoint an untrained employee to be a co-pilot on some of his trips. Brüel often blamed (in jest) these hapless individuals for doing a lousy piloting job. In the mid-1980s, a number of employees were asked to get a pilot's license to fly with Brüel, due to concern over his advancing age. He subsequently solved this issue by hiring his flight instructor as his personal co-pilot. To the consternation of passengers, the instructor was even older than Brüel. Kjær used a bicycle as his primary transportation between home and work, while Brüel used a company aircraft for commuting to and from his summer house on Anholt Island. In regard to automobiles, Nielsen used a Jaguar while Brüel had a Fiat 500. When asked why he preferred the Fiat to the other, more comfortable company cars, Brüel answered that the comfort in a Fiat 500 was so poor that he could keep the car for himself; nobody in the company would ever ask him for a lift. In 1953, Brüel & Kjær was part of the first European corporate delegation to China. Brüel died at the age of 100 on 2 April 2015. Friendship with Niels Bohr Brüel was a close friend of Niels Bohr and frequently attended dinners at Bohr's house, where they would dine and discuss physics. When Brüel and Bohr both lived in Sweden during WW2, Brüel disregarded the apparent danger and flew to Denmark from time to time, carrying documents of his friend Bohr's important work on quantum physics. Legacy Brüel was instrumental in the construction of the level recorder, which would become one of the most successful products for his company Brüel & Kjær. Brüel was a pioneering engineer within acoustics, and started one of the first companies focused on acoustics. He celebrated his 100th birthday in March 2015. Brüel & Kjær's pioneering achievements: 1940s – Various precision measurement instruments including radio frequency analyzers and Geiger counters. 1970s – Development of parallel analyzers, including the world's first analyzer to use digital filters. 1980s – Introduction of the first commercially available instrument for sound intensity measurement and the expansion of the range of instruments. 1990s – Multichannel and multi-analysis systems, including acoustical holography array systems. 2000s – Surface microphones and new technologies such as TEDS, Dyn-X, REq-X, and LAN-XI. Publications References (1) Mowry, Jackson, and Borring, Ghita, book "The Journey to Greatness: the Story of Brüel & Kjær", Acoustical Publications, Inc., Bay Village, 2012. (2) http://www.ieeeghn.org/wiki/index.php/Oral-History:Per_Bruel (3) http://www.sandv.com/downloads/0808gade.pdf (4) https://www.youtube.com/watch?v=vOJWbidWzmo (5) http://www.bksv.com/aboutus/aboutbruelandkjaer/history 1915 births 2015 deaths 20th-century Danish engineers Danish men centenarians Danish physicists Scientists from Copenhagen Quantum physicists University of Copenhagen alumni Danish people of World War II
Per Vilhelm Brüel
Physics
1,568
1,519,526
https://en.wikipedia.org/wiki/Antitropical%20distribution
Antitropical (alternatives include biantitropical or amphitropical) distribution is a type of disjunct distribution in which a species or clade exists at comparable latitudes across the equator but not in the tropics. For example, a species may be found north of the Tropic of Cancer and south of the Tropic of Capricorn, but not in between. Depending on the time since dispersal, the disjunct populations may represent the same variety, the same species, or a broader clade. How life forms reach the opposite hemisphere when they cannot normally survive the tropics in between depends on the species: plants may have their seeds spread by wind, animals, or other means and then germinate upon reaching the appropriate climate, while sea life may cross the tropical regions in a larval state or travel through deep ocean currents, which are much colder than surface waters. For the American amphitropical distribution, dispersal is generally agreed to be more likely than vicariance from a previous distribution including the tropics in North and South America. Known cases Plants Phacelia crenulata – scorpionweed Bowlesia incana – American Bowlesia Osmorhiza berteroi and Osmorhiza depauperata – sweet cecily species. Ruppia megacarpa Solenogyne For a list of American amphitropically distributed plants (237 vascular plants), see the tables in the open access paper Simpson et al. 2017 or their working group on figshare Animals Scylla serrata – mud crab Freshwater crayfish Ground beetle genus Bembidion Bryophytes and lichens Tetraplodon fuegianus - dung moss See also Rapoport's rule References Biogeography
Antitropical distribution
Biology
368
64,959,394
https://en.wikipedia.org/wiki/Energy%20diplomacy
Energy diplomacy is a form of diplomacy, and a subfield of international relations. It is closely related to its principal, foreign policy, and to overall national security, specifically energy security. Energy diplomacy began in the first half of the twentieth century and emerged as a term during the second oil crisis as a means of describing OPEC's actions. It has since mainly focused on the securitization of energy supplies, primarily fossil fuels, but also nuclear energy and increasingly sustainable energy, on a country or bloc basis. Background Energy diplomacy emerged as a term during the second oil crisis as a means of describing OPEC's actions, of characterizing the quest of the United States to secure energy independence, and of describing the Cold War relationship between Russia and its satellite states regarding oil and gas exports. Since the oil crises, energy diplomacy has mainly focused on the securitization of energy supplies on a country or bloc basis and on the foreign policy needed to obtain that energy security. Ontological relationship with national security, foreign policy, and energy security Foreign policy has existed for thousands of years of our civilization, while energy has entered it only in the last 150 years. However, in that period foreign policy and energy have had an increasing number of overlapping and interconnected elements. Foreign policy, for its part, is closely linked to and dependent on the concept of national security. National security is a principle governing the actions and relations of one state with others, based on geography, external threats and other national security challenges, of which energy is one. The three concepts of national security, foreign policy and energy security are ontologically structured: national security is the most general concept, foreign policy is one level lower, covering the international aspect of national security risks, and lowest on the scale is energy diplomacy. Foreign policy is linked to national security as it is the tool which implements overall national security. National security also has a direct link to energy diplomacy. National security denotes the capability of a nation to overcome its internal and external multi-dimensional threats by balancing all instruments of state policy through governance. It aims to protect national independence, security and territorial, political and economic integrity, dealing with a large number of national security risks. Energy is one of the fundamental items on the national security agenda. National security concerns that deal with such external issues and risks are applied and implemented by government departments for external relations. Implementation of the national security strategy involving external factors and international issues is carried out through foreign policy instruments, namely international relations and diplomacy. Energy diplomacy specifically focuses on external energy relations. Despite the ontological hierarchy of the three concepts, it is a recurring theme for them to continuously intersect in practical diplomatic life and geopolitical reality. History The beginning of the 20th century was the early era of energy diplomacy, which was largely marked by corporate players. Such diplomacy was dominated by the corporations that produced and distributed fossil fuels, rather than by sovereign governments, as in the case of Royal Dutch Shell and Standard Oil.
National security as a concept in its own right had not yet been formulated, but energy issues were growing in importance. The global oil reserves and markets were carved up persistently, as during the 1908 negotiations between Royal Dutch Shell head Deterding and Standard Oil director Teagle, or on the occasion of the signing of the "As-Is" Pool Association agreement in 1928. The corporations competed and raced over privileges, quotas, and allocations. Governments were not far behind, supporting the corporations and often facilitating the race, but it was the influential corporations that dominantly shaped the industry and foreign policy. The post-World War II era saw the fall of empires, the independence of former colonies, and global shifts in the geopolitical influence of the UK, the US, Russia, and others. It was OPEC that succeeded in the 1960s and 1970s in gaining ground against the international oil corporations, nationalizing and regaining control over national fossil fuel resources in several large producing countries. The oil shocks after WWII contributed greatly to the growth of security concerns and diplomatic efforts in the energy sphere. The most important occurrences were the Suez Crisis of 1956–1957 and the OPEC oil embargo of 1973–1974. Whole economies were brought near to a standstill, escalating energy issues into top security concerns. Other, smaller disruptions soon followed, caused by the Iranian revolution of 1979 and the Iran–Iraq War beginning in 1980, followed by the first Persian Gulf War in 1990–1991. Turbulence in the oil market that disturbed and endangered economies was also caused by the 2003 Iraq invasion, the oil price spike of 2007–2008, the Russia–Ukraine gas dispute of 2009, and other, smaller disruptions. Oil passages remain a global security concern, as 40% of all oil transits via four conduits: the straits of Hormuz, Malacca, and Bab-el-Mandeb, and the Suez Canal. The International Energy Agency (IEA) expects this share to rise from 40% to 60% by 2030. Any prolonged interruption would cause another large-scale economic downturn. Energy diplomacy has therefore entered the domain of foreign policy through the national security passageway. Numerous grave national and international risks associated with energy security and energy diplomacy have paved this way and ensured that energy is viewed and judged as a security concern; it has acquired all the features of a security issue and is constantly monitored for level of risk and for potential prevention or intervention in the diplomatic field. Next to the security path, energy concerns have entered foreign policy considerations via another path, the economy. A valid example is Australia, which in 2018 decided to form a new policy body titled energy diplomacy. Australia, by far the largest global exporter of coal, has been only mildly affected by shifts in the market and the geopolitics of energy, so its security risk concerning energy has not been very high.

The rise of energy risks and main issues

Energy diplomacy is a growing diplomatic field aimed at providing energy security. Energy has entered the sphere of diplomacy and foreign policy as a result of its rising impact on national security and the economy. Energy, the ability to do work, powers the economy. Its uninterrupted flow, inward for importing countries and outward for exporting ones, must be secured at all times.
Until the last few decades of the 20th century, the question of energy was not treated as a matter of such urgency, nor as a matter of geopolitics. Availability, affordability, and supply were not security issues. Industrial production and consumption capacities were smaller, and the movement of energy was generally safe and dependable. Throughout the industrial revolution the need for energy grew at a remarkable pace, spiraling in the 20th century. In the roughly 50 years between 1971 and 2017, world total primary energy supply grew by a factor of about 2.5, from 5,519 Mtoe to 13,972 Mtoe. Energy use worldwide is projected to grow by a further one-third by 2040. The changed situation generated a series of factors that required energy security and energy diplomacy to be elevated onto the national security agenda. National security departments worldwide closely monitor the severe escalation of energy use. The modern consumer and the contemporary economy have gradually grown to depend critically on energy. Hence, economy and energy have become inseparable concepts. Energy has become a synonym for the economy and for power, and not having enough of it has become a concern of the utmost national security. Access to energy resources has decided war outcomes; security of supply has shaped national and international agendas; and oil and gas producing countries have organized into coalitions, tapping newly discovered energy resources to back their political and geopolitical goals. Oil and gas companies became some of the most influential organizations in the global business and power-influencing arena. Oil price volatility caused by oil shocks spelled economic fortune or disaster for many participants in the international arena, affecting national and geopolitical strategies. The economic consequences were considerable, so energy had to be included on the list of security and foreign policy issues of states.

Nature of energy diplomacy

Energy diplomacy refers to diplomatic activities designed to enhance access to energy resources and markets. It is a system of influencing the policies, resolutions, and conduct of foreign governments and other international actors by means of diplomatic dialogue, negotiation, lobbying, advocacy, and other peaceful methods. The general relationship between foreign policy and energy diplomacy is conceptually one of principal and agent: foreign policy sets the goals and overall political strategy, while energy diplomacy is a mechanism for achieving those goals. Energy diplomacy is an instrument of foreign policy. Its purpose is to safeguard economic and energy security. Energy diplomacy channels the economic and trade relations of a state with other states and organizations, safeguarding energy security through availability, reliability, and affordability. Diplomatic efforts aimed at providing energy security grew in importance and complexity, and matured and spun off from general foreign policy and public diplomacy into a separate diplomatic niche, energy diplomacy, mostly after the 1970s oil crises. This diplomatic activity has several other popular names, such as "geopetroleum politics" or "petro-politics" (Dorraj and Currier, 2011) and "pipeline diplomacy" (Aalto, 2008), but these mostly cover the same field. Energy diplomacy has developed its own programs, goals, instruments, tactics, and action plans, such as the European Union Energy Diplomacy Action Plan.
Thus, at the institutional level, energy diplomacy typically focuses on such topics as targets and guidelines; regulations and energy saving; the development of nuclear energy; research, development, and demonstration; oil sharing; energy transportation; energy exploration; energy early warning and response; and, in the context of global warming, energy sustainability and the energy transition for hydrocarbon-exporting states. Commercial energy diplomacy, a hybrid of commercial diplomacy and energy diplomacy, involves political support for foreign-investing energy businesses. Energy diplomacy employs foreign policy methods to ensure a steady flow of energy and the security of energy supplies. Energy-producing and energy-consuming countries apply these methods differently. Energy-producing states mostly use energy diplomacy to expand their exports and their presence on global markets. An example is the energy diplomacy of an exporting state, Russia, which aims to secure access to buyers for its oil and gas. The same holds for the energy diplomacy of the Organization of the Petroleum Exporting Countries (OPEC), whose focus is likewise on exports and on maintaining external demand. Energy-consuming and importing states apply energy diplomacy to secure energy supplies and a steady inflow, as in China's oil diplomacy in Africa or, more recently, with Iran. There are also hybrid strategies, retained by states that are both large consumers and producers, such as India and the United States.

Energy diplomacy and the energy transition

Although energy diplomacy entered the foreign policy of some states through security concerns and of others through the economy, the energy transition is reshaping those dynamics, so that questions of security and economy will follow a new geopolitical reality. The dynamic relationship between energy diplomacy, foreign policy, and national security is thus undergoing a fundamental change driven by the energy transition. Providing energy security has traditionally involved several key notions: availability, reliability, and affordability; in the past two decades another crucial aspect has been added, namely environmental sustainability and the transition to low-carbon energy. This has initiated a huge shift in how energy and its toll on the environment are perceived, and it has prompted policies to curb climate change, spearheaded by policy makers in the EU. With the proliferation of renewable energy in the energy mix, such as solar, wind, tidal, and hydro power, together with energy efficiency, the geography of resources will no longer be limited to a few resource-rich countries but will be much more evenly spread throughout the world. The way national energy risks are perceived is gradually changing, as energy availability will be significantly improved and more prevalent all over the planet. The energy transition to low-carbon energy is already shaping the dynamic relationship between geopolitics, national security strategies, foreign policies, and energy diplomacy. Various scholars argue that renewable energy may cause more small-scale conflicts but reduce the risk of large conflicts between states.

Energy diplomacy by country or bloc

Arab states of the Persian Gulf

Hydrocarbon-exporting states in the Persian Gulf, such as those of the Gulf Cooperation Council, traditionally reliant on oil exports and often members of OPEC, are increasingly seeking bilateral relations that support their transition from fossil fuels to energy sustainability, including renewable energy and nuclear power.
Australia

Australia is considered an energy superpower. Its energy diplomacy focuses primarily on promoting fossil fuels, chiefly coal, and on securing export markets for them.

European Union

While the European Union's internal energy policy may be seen as an example of energy diplomacy between the member states, the European Union has been developing an external energy policy over the past two decades, via its EU Energy Diplomacy Action Plan, most notably with regard to Russia, Africa, and Eurasia, including across the Caspian basin.

People's Republic of China

The country on which much of the energy diplomacy literature has focused is China, due to its management of its fundamental energy insecurity, for instance in the relationship between national and corporate interests, as in its gas supply and infrastructure. China is projected to face an energy supply deficit by 2030, and its energy diplomacy is guided by the strategic need to secure sufficient gas and oil supplies by that time. Given this situation, it first aggressively attempted to apply the 'Beijing Consensus' to other countries via energy diplomacy, such as the BRICS bloc countries. China's energy diplomacy has covered a plethora of countries: in the early years Turkey, and in later years the Middle East and North Africa, with special regard to the Iran–Saudi Arabia conflict, where China's role in peace-building came under scrutiny. China's energy diplomacy with South American countries such as Brazil has also drawn scholarly attention, as has its relationship with Russia, which can be examined at the levels of personalism and institutionalism. At the heart of China's energy diplomacy as regards the West, and indeed the world, is the question of whether China's struggle for energy security will result in the normalization of its energy diplomacy behavior through economic interdependence, or whether China will continue to practice resource neo-mercantilism and power politics. Global energy governance institutions such as the International Energy Agency continue to look for responsible domestic energy governance from China, while China has switched attention from trying to impose its leadership on BRICS to developing its own "Silk Road Economic Belt", in part via the Shanghai Cooperation Organisation, as a means of securing energy imports.

Russia

Russian energy diplomacy is mainly focused on its relationship with Europe, especially over natural gas supply, including across Eurasia, and Russia has combined energy supply with cyber and maritime power as policy instruments. Russia also pursues nuclear energy diplomacy, for instance with Finland and Hungary, via Rosatom.

United States

United States (US) energy diplomacy has consistently focused on oil, and more recently on the oil and gas boom, and is coordinated by the Bureau of Energy Resources at the Department of State. Its commercial energy diplomacy interests extend widely, beyond the traditional Middle East oil exporters to Central Asian countries such as Kazakhstan. Historically, the US has exported nuclear energy reactors, building on its Atoms for Peace program, which exported research reactors.

See also
1973 oil crisis
1979 oil crisis
2000s energy crisis
Energy policy
Energy security
Energy superpower
Gulf War
International Energy Agency
ITER
OPEC
Suez crisis
Commercial diplomacy
Medical diplomacy
Public diplomacy
Defence diplomacy

References

Further reading
Abelson, P.H. (1976), "Energy diplomacy", Science, 192(4238), 429.
Zhang, C. (2016), The Domestic Dynamics of China's Energy Diplomacy, World Scientific Publishing Co.
Maness, R., Valeriano, B. (2015), Russia's Coercive Diplomacy: Energy, Cyber, and Maritime Policy as New Sources of Power, Palgrave Macmillan.

External links
United Nations Sustainable Energy Diplomacy

International relations
Types of diplomacy
Petroleum politics
Energy security
Energy treaties
Energy diplomacy
Chemistry
3,206
482,847
https://en.wikipedia.org/wiki/Panchagavya
Panchagavya or panchakavyam is a mixture used in traditional Hindu rituals that is prepared by mixing five ingredients. The three direct constituents are cow dung, cow urine, and milk; the two derived products are curd and ghee. These are mixed and then allowed to ferment. The Sanskrit word panchagavya means "five cow-derivatives". When used in Ayurvedic medicine, it is also called cowpathy.

Risks

Proponents claim that cow urine therapy is capable of curing several diseases, including certain types of cancer, although these claims have no scientific backing. In fact, studies concerning the ingestion of individual components of panchagavya, such as cow urine, have shown no positive benefit and significant side effects, including convulsion, depressed respiration, and death. Cow urine can also be a source of harmful bacteria and infectious diseases, including leptospirosis.

Non-medicinal applications

Panchagavya is used as a fertilizer and pesticide in agricultural operations. Proponents claim that it is a growth promoter in the poultry diet, that it is capable of increasing the growth of plankton for fish feed, and that it increases the production of milk in cows, the weight of pigs, and the egg-laying capacity of poultry. It is sometimes used as a base in cosmetic products.

Religious customs

It was reported by the Indian Antiquary in June 1895 (pages 168–169) that cow dung had general use in Brahman purifications and was eaten by Hindus as an atonement for sin: "Cow-dung and cow-urine, with milk, curds and butter, form the five cow-products, which are worshipped in South India. New earthen pots, are cleansed by pouring into them the five cow products - milk, curds, butter, dung and urine. The five pots are set on darba grass and worshipped. They are called the god Panchgavia, and the worshipper thinks on their merit and good qualities, lays flowers on them, and mentally presents them with a golden throne. Water is sprinkled and waved over them. They are crowned with coloured rice, and are mentally presented with jewels, rich dresses, and sandal wood. Flowers, incense, a burning lamp, plantains, and betel are offered, a low bow is made, and the following prayer repeated: "Panchgaviâ, forgive our sins and the sins of all beings who sacrifice to you and who drink you. You have come from the body of the cow; therefore I pray you to forgive my sins and to cleanse my body. Cleanse me, who offer you worship, from my sins. Pardon and save me." After a second bow and the meditation of Hari, the five products are mixed in one cup; the priest drinks a little, pours it into the hollow hands of the worshippers and they drink. Nothing is so cleansing as this mixture. All Indians often drink it. The five nectars - milk, curds, butter, sugar and honey - are good, but much less powerful."

The ancient Mahabharata epic relates that Shri, the Hindu goddess of fortune and prosperity, invisibly resides in the urine and dung of cows.

See also
List of unproven and disproven cancer treatments
Prasada
Traditional Knowledge Digital Library
Urine therapy

References

Agriculture in India
Alternative cancer treatments
Animal products
Ayurvedic medicaments
Pseudoscience
Panchagavya
Chemistry
714
20,209,393
https://en.wikipedia.org/wiki/Djenkolic%20acid
Djenkolic acid (or sometimes jengkolic acid) is a sulfur-containing non-protein amino acid naturally found in the djenkol beans of the Southeast Asian plant Archidendron jiringa. Its chemical structure is similar to that of cystine, but it contains a methylene (single-carbon) unit between the two sulfur atoms. There are about 20 grams of djenkolic acid per kilogram of dry djenkol beans, and it has also been reported in smaller amounts in the seeds of other leguminous plants such as Leucaena esculenta (2.2 g/kg) and Pithecolobium ondulatum (2.8 g/kg).

Toxicity

The toxicity of djenkolic acid in humans arises from its poor solubility under acidic conditions after consumption of the djenkol bean. The amino acid precipitates into crystals, which cause mechanical irritation of the renal tubules and urinary tract, resulting in symptoms such as abdominal discomfort, loin pains, severe colic, nausea, vomiting, dysuria, gross hematuria, and oliguria, occurring 2 to 6 hours after the beans are ingested. Urine analysis of patients reveals erythrocytes, epithelial cells, protein, and the needle-like crystals of djenkolic acid. Urolithiasis can also occur, with djenkolic acid as the nucleus of the stones. In young children, the poisoning has also been reported to produce painful swelling of the genitalia. Treatment of this toxicity requires hydration to increase urine flow and alkalinization of the urine with sodium bicarbonate. The poisoning can be prevented by boiling djenkol beans before consumption, since boiling removes the djenkolic acid.

Discovery and synthesis

Djenkolic acid was first isolated by Van Veen and Hyman in 1933 from the urine of natives of Java who had eaten the djenkol bean and were suffering from poisoning. They then isolated djenkolic acid crystals from the beans themselves by treating djenkol beans with barium hydroxide at 30 °C for a prolonged period. Du Vigneaud and Patterson synthesized djenkolic acid by condensation of methylene chloride with 2 moles of L-cysteine in liquid ammonia. Later, Armstrong and du Vigneaud prepared djenkolic acid by the direct combination of 1 mole of formaldehyde with 2 moles of L-cysteine in a strongly acidic solution.

References

Alpha-Amino acids
Sulfur amino acids
Plant toxins
Toxic amino acids
Djenkolic acid
Chemistry
526
3,238,937
https://en.wikipedia.org/wiki/Weyl%20curvature%20hypothesis
The Weyl curvature hypothesis, which arises in the application of Albert Einstein's general theory of relativity to physical cosmology, was introduced by the British mathematician and theoretical physicist Roger Penrose in a 1979 article, in an attempt to provide explanations for two of the most fundamental issues in physics. On the one hand, one would like to account for a universe which on its largest observational scales appears remarkably spatially homogeneous and isotropic in its physical properties (and so can be described by a simple Friedmann–Lemaître model); on the other hand, there is the deep question of the origin of the second law of thermodynamics.

Penrose suggests that the resolution of both of these problems is rooted in a concept of the entropy content of gravitational fields. Near the initial cosmological singularity (the Big Bang), he proposes, the entropy content of the cosmological gravitational field was extremely low (compared to what it theoretically could have been), and started rising monotonically thereafter. This process manifested itself, for example, in the formation of structure through the clumping of matter to form galaxies and clusters of galaxies. Penrose associates the initial low entropy content of the universe, or the past hypothesis, with the effective vanishing of the Weyl curvature tensor of the cosmological gravitational field near the Big Bang. From then on, he proposes, its dynamical influence gradually increased, thus being responsible for an overall increase in the amount of entropy in the universe, and so inducing a cosmological arrow of time. The Weyl curvature represents gravitational effects such as tidal fields and gravitational radiation. Mathematical treatments of Penrose's ideas on the Weyl curvature hypothesis have been given in the context of isotropic initial cosmological singularities, for example in the research literature. Penrose views the Weyl curvature hypothesis as a physically more credible alternative to cosmic inflation (a hypothetical phase of accelerated expansion in the early life of the universe) for accounting for the presently observed near spatial homogeneity and isotropy of our universe.

See also
Gravitational singularity
Unsolved problems in physics
White hole

References

Physical cosmology
General relativity
Weyl curvature hypothesis
Physics,Astronomy
443
68,283,515
https://en.wikipedia.org/wiki/BIO-LGCA
In computational and mathematical biology, a biological lattice-gas cellular automaton (BIO-LGCA) is a discrete model for moving and interacting biological agents, a type of cellular automaton. The BIO-LGCA is based on the lattice-gas cellular automaton (LGCA) model used in fluid dynamics. A BIO-LGCA model describes cells and other motile biological agents as point particles moving on a discrete lattice and interacting with nearby particles. In contrast to classic cellular automaton models, particles in a BIO-LGCA are defined by their position and velocity. This allows modeling and analyzing active fluids and collective migration mediated primarily through changes in momentum, rather than density. BIO-LGCA applications include cancer invasion and cancer progression.

Model definition

As with all cellular automaton models, a BIO-LGCA model is defined by a lattice L, a state space E, a neighborhood N, and a rule R.

The lattice (L) defines the set of all possible particle positions. Particles are restricted to occupy only certain positions, typically resulting from a regular and periodic tessellation of space. Mathematically, L is a discrete subset of d-dimensional space.

The state space (E) describes the possible states of particles within every lattice site r in L. In a BIO-LGCA, multiple particles with different velocities may occupy a single lattice site, as opposed to classic cellular automaton models, where typically only a single cell can reside in every lattice node simultaneously. This makes the state space slightly more complex than that of classic cellular automaton models (see below).

The neighborhood (N) indicates the subset of lattice sites which determines the dynamics of a given site in the lattice. Particles only interact with other particles within their neighborhood. Boundary conditions must be chosen for neighborhoods of lattice sites at the boundary of finite lattices. Neighborhoods and boundary conditions are defined exactly as for regular cellular automata (see Cellular automaton).

The rule (R) dictates how particles move, proliferate, or die with time. As with every cellular automaton, a BIO-LGCA evolves in discrete time steps. In order to simulate the system dynamics, the rule is synchronously applied to every lattice site at every time step. Rule application changes the original state of a lattice site to a new state. The rule depends on the states of lattice sites in the interaction neighborhood of the lattice site to be updated. In a BIO-LGCA, the rule is divided into two steps: a probabilistic interaction step followed by a deterministic transport step. The interaction step simulates reorientation, birth, and death processes, and is defined specifically for the modeled process. The transport step translocates particles to neighboring lattice nodes in the direction of their velocities. See below for details.

State space

For modeling particle velocities explicitly, lattice sites are assumed to have a specific substructure. Each lattice site r is connected to its neighboring lattice sites through vectors called "velocity channels", c_i, i = 1, ..., b, where the number of velocity channels b is equal to the number of nearest neighbors, and thus depends on the lattice geometry (b = 2 for a one-dimensional lattice, b = 6 for a two-dimensional hexagonal lattice, and so on). In two dimensions, the velocity channels are the unit vectors pointing from a node to its nearest neighbors. Additionally, an arbitrary number a of so-called "rest channels" with zero velocity may be defined, such that c_i = 0 for i = b + 1, ..., b + a.
A channel is said to be occupied if there is a particle in the lattice site with a velocity equal to that of the velocity channel. The occupation of channel c_i is indicated by the occupation number s_i. Typically, particles are assumed to obey an exclusion principle, such that no more than one particle may occupy a single velocity channel at a lattice site simultaneously. In this case, occupation numbers are Boolean variables, i.e. s_i is 0 or 1, and thus every site has a maximum carrying capacity K = b + a. Since the collection of all channel occupation numbers defines the number of particles and their velocities in each lattice site, the vector s = (s_1, ..., s_K) describes the state of a lattice site, and the state space is given by E = {0, 1}^K.

Rule and model dynamics

The states of every site in the lattice are updated synchronously in discrete time steps to simulate the model dynamics. The rule is divided into two steps. The probabilistic interaction step simulates particle interaction, while the deterministic transport step simulates particle movement.

Interaction step

Depending on the specific application, the interaction step may be composed of reaction and/or reorientation operators.

The reaction operator replaces the state s of a node with a new state s^R following a transition probability P(s -> s^R), which depends on the states of the neighboring lattice sites to simulate the influence of neighboring particles on the reactive process. The reaction operator does not conserve particle number, thus allowing the birth and death of individuals to be simulated. The reaction operator's transition probability is usually defined ad hoc from phenomenological observations.

The reorientation operator also replaces a state s with a new state s^O with probability P(s -> s^O). However, this operator conserves particle number and therefore only models changes in particle velocity by redistributing particles among velocity channels. The transition probability for this operator can be determined from statistical observations (by using the maximum caliber principle) or from known single-particle dynamics (using the discretized, steady-state angular probability distribution given by the Fokker-Planck equation associated to a Langevin equation describing the reorientation dynamics), and typically takes the form

P(s -> s^O) = (1/Z) exp[-beta H(s^O)] delta(n(s), n(s^O)),

where Z is a normalization constant (also known as the partition function), H(s^O) is an energy-like function which particles will likely minimize when changing their direction of motion, beta is a free parameter inversely proportional to the randomness of particle reorientation (analogous to the inverse temperature in thermodynamics), and delta(n(s), n(s^O)) is a Kronecker delta which ensures that the particle number n(s) before and after reorientation is unchanged.

The state resulting from applying the reaction and reorientation operators is known as the post-interaction configuration and denoted by s^I.

Transport step

After the interaction step, the deterministic transport step is applied synchronously to all lattice sites. The transport step simulates the movement of agents according to their velocity, due to the self-propulsion of living organisms. During this step, the occupation number of each channel in the post-interaction state becomes the occupation number of the same channel at the neighboring lattice site in the direction of the velocity channel, i.e. s_i(r + c_i) = s_i^I(r). A new time step begins when both the interaction and transport steps have occurred.
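The two-step update cycle just described can be illustrated with a minimal sketch. The following Python code is an illustrative toy implementation, not code from the BIO-LGCA literature: it uses a one-dimensional periodic lattice with b = 2 velocity channels and a purely random reorientation step (the random-walk interaction discussed below); the lattice size, density, and step count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

# 1D BIO-LGCA: 2 velocity channels (channel 0 = right, channel 1 = left).
# s[i, x] = occupation (0 or 1) of channel i at lattice site x.
L = 100                                       # lattice sites, periodic boundary
s = (rng.random((2, L)) < 0.2).astype(int)    # random initial condition

def interaction_random_walk(s, rng):
    # Reorientation operator for an unbiased random walk: redistribute the
    # particles at each node uniformly among the channels, conserving the
    # particle number per node and respecting the exclusion principle.
    K, L = s.shape
    s_post = np.zeros_like(s)
    for x in range(L):
        n = s[:, x].sum()                                  # particles at node x
        channels = rng.choice(K, size=n, replace=False)    # distinct channels
        s_post[channels, x] = 1
    return s_post

def transport(s):
    # Deterministic transport: shift each channel's occupations one site
    # in the direction of its velocity.
    s_new = np.empty_like(s)
    s_new[0] = np.roll(s[0], 1)     # right-moving particles
    s_new[1] = np.roll(s[1], -1)    # left-moving particles
    return s_new

for t in range(500):
    s = transport(interaction_random_walk(s, rng))

print("total particles conserved:", s.sum())

Because the random-walk reorientation conserves particle number and the transport step only moves particles, the total particle count printed at the end equals the initial count, as expected for an interaction step without reaction operators.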
Therefore, the dynamics of the BIO-LGCA can be summarized by the stochastic finite-difference microdynamical equation

s_i(r + c_i, k + 1) = s_i^I(r, k),

where k denotes the discrete time step.

Example interaction dynamics

The transition probability for the reaction and/or reorientation operator must be defined to appropriately simulate the modeled system. Some elementary interactions and the corresponding transition probabilities are listed below.

Random walk

In the absence of any external or internal stimuli, cells may move randomly without any directional preference. In this case, the reorientation operator may be defined through a transition probability

P(s -> s^O) = delta(n(s), n(s^O)) / Z,

where Z is the number of configurations with the same particle number as s. Such a transition probability allows any post-reorientation configuration s^O with the same number of particles as the pre-reorientation configuration s to be picked uniformly.

Simple birth and death process

If organisms reproduce and die independently of other individuals (with the exception of the finite carrying capacity), then a simple birth/death process can be simulated with a transition probability that is proportional to

r_b delta(n(s^R), n(s) + 1) + r_d delta(n(s^R), n(s) - 1),

multiplied by Heaviside factors Theta(n(s^R)) Theta(K - n(s^R)), where r_b and r_d are constant birth and death probabilities, respectively, the Kronecker delta ensures that only one birth/death event happens every time step, and the Heaviside function Theta makes sure particle numbers are positive and bounded by the carrying capacity K.

Adhesive interactions

Cells may adhere to one another by cadherin molecules on the cell surface. Cadherin interactions allow cells to form aggregates. The formation of cell aggregates via adhesive biomolecules can be modeled by a reorientation operator with transition probabilities defined as

P(s -> s^O) = (1/Z) exp[beta G(r) . J(s^O)] delta(n(s), n(s^O)),

where G(r) is a vector pointing in the direction of maximum cell density, defined as the sum over neighboring sites r' of (r' - r) n(s(r')), where s(r') is the configuration of the lattice site r' within the neighborhood, and J(s^O) is the momentum of the post-reorientation configuration, defined as the sum over channels of c_j s_j^O. This transition probability favors post-reorientation configurations with cells moving towards the cell density gradient.

Mathematical analysis

Since an exact treatment of a stochastic agent-based model quickly becomes unfeasible due to high-order correlations between all agents, the general method of analyzing a BIO-LGCA model is to cast it into an approximate, deterministic finite difference equation (FDE) describing the mean dynamics of the population, then to perform the mathematical analysis of this approximate model and compare the results to the original BIO-LGCA model. First, the expected value of the microdynamical equation is obtained:

f_i(r + c_i, k + 1) = <s_i^I(r, k)>,

where <.> denotes the expected value, and f_i(r, k) := <s_i(r, k)> is the expected value of the i-th channel occupation number of the lattice site at r at time step k. However, the term on the right is highly nonlinear in the occupation numbers of both the lattice site and the lattice sites within the interaction neighborhood, due to the form of the transition probability and the statistics of particle placement within velocity channels (for example, arising from an exclusion principle imposed on channel occupations). This nonlinearity would result in high-order correlations and moments among all channel occupations involved. Instead, a mean-field approximation is usually assumed, wherein all correlations and higher-order moments are neglected, such that direct particle-particle interactions are substituted by interactions with the respective expected values. In other words, if X_1, ..., X_n are random variables and f is a function, then <f(X_1, ..., X_n)> is approximated by f(<X_1>, ..., <X_n>) under this approximation.
Thus, we can simplify the equation to

f_i(r + c_i, k + 1) = C_i(f_N(r, k)),

where C_i is a nonlinear function of the expected lattice site configuration and the expected neighborhood configuration f_N(r, k), dependent on the transition probabilities and in-node particle statistics. From this nonlinear FDE, one may identify several homogeneous steady states, i.e. constant solutions independent of r and k. To study the stability conditions of these steady states and the pattern formation potential of the model, a linear stability analysis can be performed. To do so, the nonlinear FDE is linearized around the homogeneous steady state, retaining only terms that are first order in the perturbations of the expected occupation numbers, where a von Neumann neighborhood was assumed. In order to cast it into a more familiar finite difference equation with temporal increments only, a discrete Fourier transform can be applied to both sides of the equation. After applying the shift theorem and isolating the term with a temporal increment on the left, one obtains the lattice-Boltzmann equation for the Fourier-transformed perturbations, where the Fourier factors exp(-2 pi i q . c_j / L) arise from the transport shift; here i is the imaginary unit, L is the size of the lattice along one dimension, and q is the Fourier wave number. In matrix notation, this equation simplifies to

delta f~(q, k + 1) = Gamma(q) delta f~(q, k),

where the matrix Gamma(q) is called the Boltzmann propagator; it combines the Fourier factors of the transport step with the Jacobian of the interaction rule evaluated at the steady state. The eigenvalues lambda(q) of the Boltzmann propagator dictate the stability properties of the steady state:

If |lambda(q)| > 1, where |.| denotes the modulus, then perturbations with wave number q grow with time. If additionally |lambda(q)| is maximal over all wave numbers, then perturbations with wave number q will dominate and patterns with a clear wavelength will be observed. Otherwise, the steady state is stable and any perturbations will decay.
If arg lambda(q) is nonzero, where arg denotes the argument, then perturbations are transported and non-stationary population behaviors are observed. Otherwise, the population will appear static at the macroscopic level.

Applications

Constructing a BIO-LGCA for the study of biological phenomena mainly involves defining appropriate transition probabilities for the interaction operator, though precise definitions of the state space (to consider several cellular phenotypes, for example), boundary conditions (for modeling phenomena in confined conditions), neighborhood (to match experimental interaction ranges quantitatively), and carrying capacity (to simulate crowding effects for given cell sizes) may be important for specific applications. While the distribution of the reorientation operator can be obtained through the aforementioned statistical and biophysical methods, the distribution of the reaction operators can be estimated from the statistics of in vitro experiments, for example. BIO-LGCA models have been used to study several cellular, biophysical, and medical phenomena. Some examples include:

Angiogenesis: an in vitro experiment with endothelial cells was compared with BIO-LGCA simulation observables to determine the processes involved during angiogenesis and their weight. The study found that adhesion, alignment, contact guidance, and ECM remodeling are all involved in angiogenesis, while long-range interactions are not vital to the process.
Active fluids: the macroscopic physical properties of a population of particles interacting through polar alignment interactions were investigated using a BIO-LGCA model. It was found that increasing the initial particle density and the interaction strength results in a second-order phase transition from a homogeneous, disordered state to an ordered, patterned, moving state.
Epidemiology: a spatial SIR BIO-LGCA model was used to study the effect of different vaccination strategies, and the effect of approximating a spatial epidemic with a non-spatial model. The authors found that barrier-type vaccination strategies are much more effective than spatially uniform vaccination strategies, and that non-spatial models greatly overestimate the rate of infection.
Cell jamming: in vitro experiments and BIO-LGCA models were used to study metastatic behavior in breast cancer. The BIO-LGCA model revealed that metastatic cell populations may exhibit different behaviors, such as random gas-like, jammed solid-like, and correlated fluid-like states, depending on the level of adhesion among cells, the ECM density, and cell-ECM interactions.

References

External links
Bio-LGCA Simulator - An online simulator with elementary interactions and personalizable parameter values.
BIO-LGCA Python Package - An open source Python package for implementing BIO-LGCA model simulations.

Statistical mechanics
Lattice models
Stochastic models
Complex dynamics
BIO-LGCA
Physics,Materials_science,Mathematics
2,826
13,871,411
https://en.wikipedia.org/wiki/Plasma-immersion%20ion%20implantation
Plasma-immersion ion implantation (PIII) or pulsed-plasma doping (pulsed PIII) is a surface modification technique in which accelerated ions are extracted from a plasma by applying a high-voltage pulsed DC or pure DC power supply and directed onto a suitable substrate or electrode, with a semiconductor wafer placed over it, so as to implant the wafer with suitable dopants. The electrode is a cathode for an electropositive plasma, and an anode for an electronegative plasma. Plasma can be generated in a suitably designed vacuum chamber with the help of various plasma sources, such as an electron cyclotron resonance plasma source (which yields plasma with the highest ion density and lowest contamination level), a helicon plasma source, a capacitively coupled plasma source, an inductively coupled plasma source, a DC glow discharge, or a metal vapor arc (for metallic species). The vacuum chamber can be of two types, diode and triode, depending upon whether the power supply is applied to the substrate (the former case) or to a perforated grid (the latter).

Working

In a conventional immersion-type PIII system, also called the diode-type configuration, the wafer is kept at a negative potential, since it is the positively charged ions of the electropositive plasma that get extracted and implanted. The wafer sample to be treated is placed on a sample holder in a vacuum chamber. The sample holder is connected to a high-voltage power supply and is electrically insulated from the chamber wall. By means of pumping and gas feed systems, an atmosphere of a working gas at a suitable pressure is created. When the substrate is biased to a negative voltage (a few kV), the resultant electric field drives electrons away from the substrate on the time scale of the inverse electron plasma frequency ωe−1 (~10−9 s). Thus an ion-matrix Debye sheath, depleted of electrons, forms around the substrate. The negatively biased substrate then accelerates the ions within a time scale of the inverse ion plasma frequency ωi−1 (~10−6 s). This ion movement lowers the ion density in the bulk, which causes the sheath-plasma boundary to expand in order to sustain the applied potential drop, in the process exposing more ions. The plasma sheath expands until either a steady-state condition is reached, called the Child-Langmuir law limit, or the high voltage is switched off, as in the case of pulsed DC biasing. Pulsed biasing is preferred over DC biasing because it creates less damage during the pulse on-time and allows neutralization of unwanted charges accumulated on the wafer during the afterglow period (i.e. after the pulse has ended). In the case of pulsed biasing, the TON time of the pulse is generally kept at 20-40 μs, while TOFF is kept at 0.5-2 ms, i.e. a duty cycle of roughly 1-8%. The power supply used is in the range of 500 V to hundreds of kV, and the pressure is in the range of 1-100 mTorr. This is the basic principle of the operation of immersion-type PIII. In the case of a triode-type configuration, a suitable perforated grid is placed between the substrate and the plasma, and a pulsed DC bias is applied to this grid. Here the same theory applies as previously discussed, with the difference that the ions extracted through the grid holes bombard the substrate, thus causing implantation. In this sense a triode-type PIII implanter is a crude version of conventional ion implantation, because it does not contain the plethora of components, such as ion beam steering, beam focusing, and additional grid accelerators, found in a beam-line implanter.
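The time scales and duty cycle quoted above can be checked at the order-of-magnitude level from the standard plasma frequency formula, omega_p = sqrt(n e^2 / (eps0 m)). The following Python sketch is illustrative only: the plasma density and ion species chosen below are assumptions for the example, not values taken from the text.

import math

e    = 1.602e-19       # elementary charge (C)
eps0 = 8.854e-12       # vacuum permittivity (F/m)
m_e  = 9.109e-31       # electron mass (kg)
m_i  = 40 * 1.661e-27  # assumed ion mass: argon (kg)
n    = 1e16            # assumed plasma density (m^-3)

omega_e = math.sqrt(n * e**2 / (eps0 * m_e))  # electron plasma frequency (rad/s)
omega_i = math.sqrt(n * e**2 / (eps0 * m_i))  # ion plasma frequency (rad/s)

print(f"1/omega_e ~ {1/omega_e:.1e} s")   # ~2e-10 s, consistent with ~1e-9 s
print(f"1/omega_i ~ {1/omega_i:.1e} s")   # ~5e-8 s, approaching ~1e-6 s

# Duty cycle for mid-range pulse timings quoted in the text.
t_on, t_off = 30e-6, 1e-3                 # 30 us on, 1 ms off
print(f"duty cycle ~ {t_on / (t_on + t_off):.1%}")   # ~3%, within the 1-8% range

The separation of the two time scales (electrons respond roughly two to three orders of magnitude faster than ions) is what allows the electron-depleted ion-matrix sheath to form before the ions have appreciably moved.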
References

Other sources
Viswanathan, C.R., "Plasma induced damage," Microelectronic Engineering, Vol. 49, No. 1-2, November 1999, pp. 65-81.

ion implantation
Semiconductor device fabrication
Thin films
Etching (microfabrication)
Plasma-immersion ion implantation
Materials_science,Mathematics,Engineering
810
22,445,549
https://en.wikipedia.org/wiki/Strong%20reciprocity
Strong reciprocity is an area of research in behavioral economics, evolutionary psychology, and evolutionary anthropology on the predisposition to cooperate even when there is no apparent benefit in doing so. This topic is particularly interesting to those studying the evolution of cooperation, as these behaviors seem to contradict predictions made by many models of cooperation. In response, current work on strong reciprocity is focused on developing evolutionary models which can account for this behavior. Critics of strong reciprocity argue that it is an artifact of lab experiments and does not reflect cooperative behavior in the real world.

Evidence for strong reciprocity

Experimental evidence

A variety of studies from experimental economics provide evidence for strong reciprocity, either by demonstrating people's willingness to cooperate with others, or by demonstrating their willingness to take costs on themselves to punish those who do not.

Evidence for cooperation

One experimental game used to measure levels of cooperation is the dictator game. In the standard form of the dictator game, there are two anonymous, unrelated participants. One participant is assigned the role of the allocator and the other the role of the recipient. The allocator is assigned some amount of money, which they can divide in any way they choose. If a participant is trying to maximize their payoff, the rational solution (the Nash equilibrium) is for the allocator to assign nothing to the recipient. In a 2011 meta-study of 616 dictator game studies, Engel found an average allocation of 28.3%, with 36% of participants giving nothing, 17% choosing the equal split, and 5.44% giving the recipient everything.

The trust game, an extension of the dictator game, provides additional evidence for strong reciprocity. The trust game extends the dictator game by multiplying the amount given by the allocator to the recipient by some value greater than one, and then allowing the recipient to give some amount back to the allocator. Once again, if participants are trying to maximize their payoff, the recipient should give nothing back to the allocator, and the allocator should therefore assign nothing to the recipient. A 2009 meta-analysis of 84 trust game studies revealed that the allocator gave an average of 51% and that the receiver returned an average of 37%.

A third experiment commonly used to demonstrate strong reciprocity preferences is the public goods game. In a public goods game, some number of participants are placed in a group. Each participant is given some amount of money. They are then allowed to contribute any part of their allocation to a common pool. The common pool is then multiplied by some amount greater than one and evenly redistributed to each participant, regardless of how much they contributed. In this game, for anyone trying to maximize their payoff, the rational Nash equilibrium strategy is to contribute nothing. However, in a 2001 study, Fischbacher observed average contributions of 33.5%.

Evidence for punishing non-cooperators

The second component of strong reciprocity is that people are willing to punish those who fail to cooperate, even when punishment is costly. There are two types of punishment: second-party and third-party punishment. In second-party punishment, the person who was hurt by the other party's failure to cooperate has the opportunity to punish the non-cooperator. In third-party punishment, an uninvolved third party has the opportunity to punish the non-cooperator.
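Before turning to the punishment games, the incentive structure of the public goods game described above can be made concrete with a small numerical sketch. The following Python example uses illustrative parameter values (endowment, multiplier, group size, and the fee-to-fine ratio are assumptions chosen for the example, not figures from the studies cited).

def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    # Each player's payoff: what they kept, plus an equal share of the
    # multiplied common pool.
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Three cooperators contribute fully; one free-rider contributes nothing.
payoffs = public_goods_payoffs([20, 20, 20, 0])
print(payoffs)   # [24.0, 24.0, 24.0, 44.0] -- the free-rider earns the most

# Costly second-party punishment: each cooperator pays a fee of 1 to deduct
# a fine of 3 from the free-rider (an illustrative 1:3 fee-to-fine ratio).
fee, fine = 1, 3
punished = [p - fee for p in payoffs[:3]] + [payoffs[3] - 3 * fine]
print(punished)  # [23.0, 23.0, 23.0, 35.0]

With these numbers the free-rider's advantage shrinks from 20 to 12 units; with larger fines or more punishers, defection becomes unprofitable altogether, which is the mechanism behind the sustained cooperation reported in the punishment experiments discussed next.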
A common game used to measure willingness to engage in second-party punishment is the ultimatum game. This game is very similar to the previously described dictator game, in which the allocator divides a sum of money between himself and a recipient. In the ultimatum game, the recipient has the choice to either accept the offer or reject it, in which case both players receive nothing. If recipients are payoff maximizers, it is the Nash equilibrium for them to accept any offer, and it is therefore in the allocator's interest to offer as close to zero as possible. However, the experimental results show that allocators usually offer over 40%, and that offers are rejected by recipients 16% of the time. Recipients are more likely to reject low offers than high offers.

Another example of second-party punishment is the public goods game as described earlier, but with a second stage added in which participants can pay to punish other participants. In this game, a payoff maximizer's rational strategy in the Nash equilibrium is neither to punish nor to contribute. However, experimental results show that participants are willing to pay to punish those who deviate from the average level of contribution - so much so that it becomes disadvantageous to give a lower amount, which allows for sustained cooperation.

Modifications of the dictator game and the prisoner's dilemma provide support for the willingness to engage in costly third-party punishment. The modified dictator game is exactly the same as the traditional dictator game, but with a third party observing. After the allocator makes their decision, the third party has the opportunity to pay to punish the allocator. A payoff-maximizing third party would choose not to punish, and a similarly rational allocator would choose to keep the entire sum for himself. However, experimental results show that a majority of third parties punish allocations of less than 50%.

In the prisoner's dilemma with third-party punishment, two of the participants play a prisoner's dilemma, in which each must choose to either cooperate or defect. The game is set up such that, regardless of what the other player does, it is rational for an income maximizer to always defect, even though both players cooperating yields a higher payoff than both players defecting. A third player observes this exchange and can then pay to punish either player. An income-maximizing third party's rational response would be not to punish, and income-maximizing players would choose to defect. A 2004 study demonstrated that a near majority of participants (46%) are willing to pay to punish if one participant defects. If both parties defect, 21% are still willing to punish.

Link from experiments to the field

Other researchers have investigated to what extent these behavioral economic lab experiments on social preferences can be generalized to behavior in the field. In a 2011 study, Fehr and Leibbrandt examined the relationship between contributions in public goods games and participation in public goods provision in a community of shrimpers in Brazil. These shrimpers cut a hole in the bottom of their fishing buckets in order to allow immature shrimp to escape, thereby investing in the public good of the shared shrimp population. The size of the hole can be seen as the degree to which participants cooperate, as larger holes allow more shrimp to escape.
Controlling for a number of other possible influences, Fehr and Leibbrandt demonstrated a positive relationship between hole size and contributions in the public goods game experiment. Rustagi and colleagues were able to demonstrate a similar effect with 49 groups of Bale Oromo herders in Ethiopia who were participating in forest management. Results from public goods game experiments revealed that more than one third of the participating herders were conditional cooperators, meaning that they cooperate with other cooperators. Rustagi et al. demonstrated that groups with more conditional cooperators planted more trees.

Ethnographic field evidence

In addition to experimental results, ethnography collected by anthropologists describes strong reciprocity observed in the field. Records of the Turkana, an acephalous African pastoral group, demonstrate strong reciprocity behavior. If someone acts cowardly in combat or commits some other free-riding behavior, the group confers and decides whether a violation has occurred. If it decides that a violation has occurred, corporal punishment is administered by the age cohort of the violator. Importantly, the members of the age cohort taking the risks are not necessarily those who were harmed, making this costly third-party punishment.

The Walibri of Australia also exhibit costly third-party punishment. The local community determines whether an act of homicide, adultery, theft, etc. was an offense. The community then appoints someone to carry out the punishment, and others to protect that person against retaliation. Data from the Aranda foragers of the Central Desert of Australia suggest this punishment can be very costly, as it carries the risk of retaliation from the family members of the punished, which can be as severe as homicide.

Evolutionary models of cooperation which account for strong reciprocity

A number of evolutionary models have been proposed in order to account for the existence of strong reciprocity. This section briefly touches on an important small subset of such models. The first model of strong reciprocity was proposed by Herbert Gintis in 2000, and contained a number of simplifying assumptions addressed in later models. In 2004, Samuel Bowles and Gintis presented a follow-up model in which they incorporated cognitive, linguistic, and other capacities unique to humans in order to demonstrate how these might be harnessed to strengthen the power of social norms in large-scale public goods games. In a 2001 model, Joe Henrich and Robert Boyd also built on Gintis' model by incorporating conformist transmission of cultural information, demonstrating that this can also stabilize cooperative group norms. Boyd, Gintis, Bowles, and Peter Richerson's 2003 model of the evolution of third-party punishment demonstrates that even though the logic underlying altruistic giving and altruistic punishment may be the same, the evolutionary dynamics are not. This model was the first to employ cultural group selection in order to select for better-performing groups, while using norms to stabilize behavior within groups. Though many of the previously proposed punishment mechanisms were both costly and uncoordinated, a 2010 model by Boyd, Gintis, and Bowles presents a mechanism for coordinated costly punishment. In this quorum-sensing model, each agent chooses whether or not they are willing to engage in punishment. If a sufficient number of agents are willing to engage in punishment, the group acts collectively to administer it.
An important aspect of this model is that strong reciprocity is self-regarding when rare in the population, but may be altruistic when common within a group.

Cross-cultural variation

Significant cross-cultural variation has been observed in strong reciprocity behavior. In 2001, dictator game experiments were run in 15 small-scale societies across the world. The results of the experiments showed dramatic variation, with some groups' mean offers as low as 26% and some as high as 58%. The pattern of receiver results was also interesting, with participants in some cultures rejecting offers above 50%. Henrich and colleagues determined that the best predictors of dictator game allocations were the size of the group (small groups giving less) and market integration (the more involved with markets, the more participants gave). This study was then repeated with a different set of 15 small-scale societies and with better measures of market integration, finding a similar pattern of results. These results are consistent with the culture-gene coevolution hypothesis. A later paper by the same researchers identified religion as a third major contributor: people who participate in a world religion were more likely to exhibit strong reciprocity behavior.

Criticisms

A particularly prominent criticism of strong reciprocity theory is that it does not correspond to behavior found in the actual environment. In particular, the existence of third-party punishment in the field is called into question. Some have responded to this criticism by pointing out that, if effective, third-party punishment will rarely be used and will therefore be difficult to observe. Others have suggested that there is evidence of costly third-party punishment in the field. Critics have responded to these claims by arguing that it is unfair for proponents to present both a demonstration of costly third-party punishment and a lack of costly third-party punishment as evidence of its existence. They also question whether the ethnographic evidence presented constitutes costly third-party punishment, and call for additional analysis of the costs and benefits of the punishment. Other research has shown that different types of strong reciprocity do not predict other types of strong reciprocity within individuals.

Implications

The existence of strong reciprocity implies that systems developed based purely on material self-interest may be missing important motivators in the marketplace. This section gives two examples of possible implications. One area of application is the design of incentive schemes. For example, standard contract theory has difficulty explaining the degree of incompleteness in contracts and the lack of use of performance measures, even when they are cheap to implement. Strong reciprocity and models based on it suggest that this can be explained by people's willingness to act fairly, even when it is against their material self-interest. Experimental results suggest that this is indeed the case, with participants preferring less complete contracts, and workers willing to contribute a fair amount beyond what would be in their own self-interest. Another application of strong reciprocity is in allocating property rights and ownership structure. Joint ownership of property can be very similar to the public goods game, in which owners can independently contribute to a common pool, which then returns on the investment and is evenly distributed to all parties.
This ownership structure is subject to the tragedy of the commons: if all the parties are self-interested, no one will invest. Alternatively, property can be allocated in an owner-employee relationship, in which an employee is hired by the owner and paid a specific wage for a specific level of investment. Experimental studies show that participants generally prefer joint ownership, and do better under joint ownership than in the owner-employee organization.

See also
Reciprocal altruism
Reciprocity (evolution)
Evolutionary psychology
Sociobiology
Dual inheritance theory

Notable contributors
Ernst Fehr
Herbert Gintis
Samuel Bowles
Joseph Henrich
Robert Boyd
Peter Richerson

References

Further reading

Behavioral economics
Evolutionary biology
Cultural anthropology
Strong reciprocity
Biology
2,770
6,556
https://en.wikipedia.org/wiki/Coprime%20integers
In number theory, two integers a and b are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Consequently, any prime number that divides a does not divide b, and vice versa. This is equivalent to their greatest common divisor (GCD) being 1. One also says a is prime to b or a is coprime with b. The numbers 8 and 9 are coprime, despite the fact that neither of them, considered individually, is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of a reduced fraction are coprime, by definition.

Notation and testing

When the integers a and b are coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formula gcd(a, b) = 1 or (a, b) = 1. In their 1989 textbook Concrete Mathematics, Ronald Graham, Donald Knuth, and Oren Patashnik proposed the alternative notation a ⊥ b to indicate that a and b are relatively prime, and proposed that the term "prime" be used instead of coprime (as in a is prime to b).

A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm and its faster variants such as the binary GCD algorithm or Lehmer's GCD algorithm.

The number of integers coprime with a positive integer n, between 1 and n, is given by Euler's totient function, also known as Euler's phi function, φ(n).

A set of integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means that a and b are coprime for every pair (a, b) of different integers in the set. The set {2, 3, 4} is coprime, but it is not pairwise coprime, since 2 and 4 are not relatively prime.

Properties

The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0.

A number of conditions are equivalent to a and b being coprime:

No prime number divides both a and b.
There exist integers x and y such that ax + by = 1 (see Bézout's identity).
The integer b has a multiplicative inverse modulo a, meaning that there exists an integer y such that by ≡ 1 (mod a). In ring-theoretic language, b is a unit in the ring of integers modulo a.
Every pair of congruence relations for an unknown integer x, of the form x ≡ k (mod a) and x ≡ m (mod b), has a solution (Chinese remainder theorem); in fact the solutions are described by a single congruence relation modulo ab.
The least common multiple of a and b is equal to their product ab, i.e. lcm(a, b) = ab.

As a consequence of the third point, if a and b are coprime and br ≡ bs (mod a), then r ≡ s (mod a). That is, we may "divide by b" when working modulo a. Furthermore, if b1 and b2 are both coprime with a, then so is their product b1b2 (i.e., modulo a it is a product of invertible elements, and therefore invertible); this also follows from the first point by Euclid's lemma, which states that if a prime number p divides a product bc, then p divides at least one of the factors b, c.

As a consequence of the first point, if a and b are coprime, then so are any powers a^k and b^m.

If a and b are coprime and a divides the product bc, then a divides c. This can be viewed as a generalization of Euclid's lemma.

The two integers a and b are coprime if and only if the point with coordinates (a, b) in a Cartesian coordinate system would be "visible" via an unobstructed line of sight from the origin (0, 0), in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and (a, b).

In a sense that can be made precise, the probability that two randomly chosen integers are coprime is 6/π², which is about 61% (see below).
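Several of the conditions above can be checked directly with the Euclidean algorithm and its extended form. The following Python sketch uses only the standard library; the helper function name is our own choice for illustration.

from math import gcd

# Coprimality test via the Euclidean algorithm.
print(gcd(8, 9))    # 1 -> 8 and 9 are coprime
print(gcd(6, 9))    # 3 -> not coprime

def extended_gcd(a, b):
    # Return (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Bezout's identity: when gcd(a, b) = 1, the coefficient of b is a
# multiplicative inverse of b modulo a (the second and third conditions).
a, b = 9, 8
g, x, y = extended_gcd(a, b)
print(g, x, y)            # 1 1 -1, since 9*1 + 8*(-1) = 1
print((b * (y % a)) % a)  # 1 -> y is the inverse of b mod a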
Two natural numbers a and b are coprime if and only if the numbers 2^a − 1 and 2^b − 1 are coprime. As a generalization of this, following easily from the Euclidean algorithm in base n > 1: gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1. Coprimality in sets A set of integers can also be called coprime or setwise coprime if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them. If every pair in a set of integers is coprime, then the set is said to be pairwise coprime (or pairwise relatively prime, mutually coprime or mutually relatively prime). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividing all of them is 1), but they are not pairwise coprime (because gcd(4, 6) = 2). The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as the Chinese remainder theorem. It is possible for an infinite set of integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements in Sylvester's sequence, and the set of all Fermat numbers. Coprimality in ring ideals Two ideals A and B in a commutative ring R are called coprime (or comaximal) if A + B = R. This generalizes Bézout's identity: with this definition, two principal ideals (a) and (b) in the ring of integers Z are coprime if and only if a and b are coprime. If the ideals A and B of R are coprime, then AB = A ∩ B; furthermore, if C is a third ideal such that A contains BC, then A contains C. The Chinese remainder theorem can be generalized to any commutative ring, using coprime ideals. Probability of coprimality Given two randomly chosen integers a and b, it is reasonable to ask how likely it is that a and b are coprime. In this determination, it is convenient to use the characterization that a and b are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic). Informally, the probability that any number is divisible by a prime (or in fact any integer) p is 1/p; for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible by p is 1/p², and the probability that at least one of them is not is 1 − 1/p². Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primes p and q if and only if it is divisible by pq; the latter event has probability 1/(pq). If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes, ∏p (1 − 1/p²) = 1/ζ(2) = 6/π² ≈ 0.607927. Here ζ refers to the Riemann zeta function, the identity relating the product over primes to ζ(2) is an example of an Euler product, and the evaluation of ζ(2) as π²/6 is the Basel problem, solved by Leonhard Euler in 1735. There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion of natural density. For each positive integer N, let P_N be the probability that two randomly chosen numbers in {1, 2, ..., N} are coprime. Although P_N will never equal 6/π² exactly, with work one can show that in the limit as N → ∞, the probability P_N approaches 6/π². 
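The 6/π² density is easy to check empirically. A small Monte Carlo sketch in Python (the sample size and range bound are arbitrary illustrative choices):

    import random
    from math import gcd, pi

    def coprime_fraction(samples: int = 100_000, bound: int = 10**9) -> float:
        # Estimate the probability that two randomly chosen integers are coprime.
        hits = sum(1 for _ in range(samples)
                   if gcd(random.randrange(1, bound), random.randrange(1, bound)) == 1)
        return hits / samples

    print(coprime_fraction())   # typically prints a value close to 0.608
    print(6 / pi ** 2)          # 0.6079271018540267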
More generally, the probability of k randomly chosen integers being setwise coprime is 1/ζ(k). Generating all coprime pairs All pairs of positive coprime numbers (m, n) (with m > n) can be arranged in two disjoint complete ternary trees, one tree starting from (2, 1) (for even–odd and odd–even pairs), and the other tree starting from (3, 1) (for odd–odd pairs). The children of each vertex (m, n) are generated as follows: Branch 1: (2m − n, m) Branch 2: (2m + n, m) Branch 3: (m + 2n, n) This scheme is exhaustive and non-redundant with no invalid members, as illustrated by the sketch after this paragraph. This can be proved by remarking that, if (a, b) is a coprime pair with a > b, then if a > 3b, then (a, b) is a child of (a − 2b, b) along branch 3; if 2b < a < 3b, then (a, b) is a child of (b, a − 2b) along branch 2; if b < a < 2b, then (a, b) is a child of (b, 2b − a) along branch 1. In all cases the father so obtained is a "smaller" coprime pair with first member larger than the second. This process of "computing the father" can stop only if either a = 2b or a = 3b. In these cases, coprimality implies that the pair is either (2, 1) or (3, 1). Another (much simpler) way to generate a tree of positive coprime pairs (m, n) (with m > n) is by means of two generators f : (m, n) → (m + n, n) and g : (m, n) → (m + n, m), starting with the root (2, 1). The resulting binary tree, the Calkin–Wilf tree, is exhaustive and non-redundant, which can be seen as follows. Given a coprime pair one recursively applies the inverse of f or the inverse of g depending on which of them yields a positive coprime pair with m > n. Since only one does, the tree is non-redundant. Since by this procedure one is bound to arrive at the root, the tree is exhaustive. Applications In machine design, an even, uniform gear wear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1 gear ratio is desired, a gear relatively prime to the two equal-size gears may be inserted between them. In pre-computer cryptography, some Vernam cipher machines combined several loops of key tape of different lengths. Many rotor machines combine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime. Generalizations This concept can be extended to other algebraic structures than the integers; for example, polynomials whose greatest common divisor is 1 are called coprime polynomials. See also Euclid's orchard Superpartient number Notes References Further reading Number theory
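As a concrete illustration of the ternary-tree construction described in this section, a breadth-first Python enumeration of coprime pairs (the cutoff is an arbitrary choice):

    from collections import deque
    from math import gcd

    def coprime_pairs(limit: int):
        # Walk the two ternary trees rooted at (2, 1) and (3, 1) breadth-first.
        # Yields every coprime pair (m, n) with m > n and m <= limit exactly once:
        # all three branches strictly increase m, so pruning at the limit is safe.
        queue = deque([(2, 1), (3, 1)])
        while queue:
            m, n = queue.popleft()
            if m > limit:
                continue
            yield (m, n)
            queue.extend([(2 * m - n, m), (2 * m + n, m), (m + 2 * n, n)])

    pairs = list(coprime_pairs(30))
    assert all(gcd(m, n) == 1 for m, n in pairs)   # only valid members
    assert len(pairs) == len(set(pairs))           # non-redundant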
Coprime integers
Mathematics
2,084
308,474
https://en.wikipedia.org/wiki/Non-uniform%20rational%20B-spline
Non-uniform rational basis spline (NURBS) is a mathematical model using basis splines (B-splines) that is commonly used in computer graphics for representing curves and surfaces. It offers great flexibility and precision for handling both analytic (defined by common mathematical formulae) and modeled shapes. It is a type of curve modeling, as opposed to polygonal modeling or digital sculpting. NURBS curves are commonly used in computer-aided design (CAD), manufacturing (CAM), and engineering (CAE). They are part of numerous industry-wide standards, such as IGES, STEP, ACIS, and PHIGS. Tools for creating and editing NURBS surfaces are found in various 3D graphics, rendering, and animation software packages. They can be efficiently handled by computer programs yet allow for easy human interaction. NURBS surfaces are functions of two parameters mapping to a surface in three-dimensional space. The shape of the surface is determined by control points. In a compact form, NURBS surfaces can represent simple geometrical shapes. For complex organic shapes, T-splines and subdivision surfaces are more suitable because they halve the number of control points in comparison with the NURBS surfaces. In general, editing NURBS curves and surfaces is intuitive and predictable. Control points are always either connected directly to the curve or surface, or else act as if they were connected by a rubber band. Depending on the type of user interface, the editing of NURBS curves and surfaces can be via their control points (similar to Bézier curves) or via higher level tools such as spline modeling and hierarchical editing. History Before computers, designs were drawn by hand on paper with various drafting tools. Rulers were used for straight lines, compasses for circles, and protractors for angles. But many shapes, such as the freeform curve of a ship's bow, could not be drawn with these tools. Although such curves could be drawn freehand at the drafting board, shipbuilders often needed a life-size version which could not be done by hand. Such large drawings were done with the help of flexible strips of wood, called splines. The splines were held in place at a number of predetermined points, by lead "ducks", named for the bill-shaped protrusion that the splines rested against. Between the ducks, the elasticity of the spline material caused the strip to take the shape that minimized the energy of bending, thus creating the smoothest possible shape that fit the constraints. The shape could be adjusted by moving the ducks. In 1946, mathematicians started studying the spline shape, and derived the piecewise polynomial formula known as the spline curve or spline function. I. J. Schoenberg gave the spline function its name after its resemblance to the mechanical spline used by draftsmen. As computers were introduced into the design process, the physical properties of such splines were investigated so that they could be modelled with mathematical precision and reproduced where needed. Pioneering work was done in France by Renault engineer Pierre Bézier, and Citroën's physicist and mathematician Paul de Casteljau. They worked nearly parallel to each other, but because Bézier published the results of his work, Bézier curves were named after him, while de Casteljau's name is only associated with related algorithms. NURBS were initially used only in the proprietary CAD packages of car companies. Later they became part of standard computer graphics packages. 
Real-time, interactive rendering of NURBS curves and surfaces was first made commercially available on Silicon Graphics workstations in 1989. In 1993, the first interactive NURBS modeller for PCs, called NöRBS, was developed by CAS Berlin, a small startup company cooperating with Technische Universität Berlin. Continuity A surface under construction, e.g. the hull of a motor yacht, is usually composed of several NURBS surfaces known as NURBS patches (or just patches). These surface patches should be fitted together in such a way that the boundaries are invisible. This is mathematically expressed by the concept of geometric continuity. Higher-level tools exist that benefit from the ability of NURBS to create and establish geometric continuity of different levels: Positional continuity (G0) holds whenever the end positions of two curves or surfaces coincide. The curves or surfaces may still meet at an angle, giving rise to a sharp corner or edge and causing broken highlights. Tangential continuity (G1) requires the end vectors of the curves or surfaces to be parallel and pointing the same way, ruling out sharp edges. Because highlights falling on a tangentially continuous edge are always continuous and thus look natural, this level of continuity can often be sufficient. Curvature continuity (G2) further requires the end vectors to be of the same length and rate of length change. Highlights falling on a curvature-continuous edge do not display any change, causing the two surfaces to appear as one. This can be visually recognized as "perfectly smooth". This level of continuity is very useful in the creation of models that require many bi-cubic patches composing one continuous surface. Geometric continuity mainly refers to the shape of the resulting surface; since NURBS surfaces are functions, it is also possible to discuss the derivatives of the surface with respect to the parameters. This is known as parametric continuity. Parametric continuity of a given degree implies geometric continuity of that degree. First- and second-level parametric continuity (C0 and C1) are for practical purposes identical to positional and tangential (G0 and G1) continuity. Third-level parametric continuity (C2), however, differs from curvature continuity in that its parameterization is also continuous. In practice, C2 continuity is easier to achieve if uniform B-splines are used. The definition of Cn continuity requires that the nth derivatives of adjacent curves/surfaces (dⁿC(u)/duⁿ) are equal at a joint. Note that the (partial) derivatives of curves and surfaces are vectors that have a direction and a magnitude; both should be equal. Highlights and reflections can reveal the perfect smoothing, which is otherwise practically impossible to achieve without NURBS surfaces that have at least G2 continuity. This same principle is used as one of the surface evaluation methods whereby a ray-traced or reflection-mapped image of a surface with white stripes reflecting on it will show even the smallest deviations on a surface or set of surfaces. This method is derived from car prototyping wherein surface quality is inspected by checking the quality of reflections of a neon-light ceiling on the car surface. This method is also known as "Zebra analysis". Technical specifications A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. 
NURBS curves and surfaces are generalizations of both B-splines and Bézier curves and surfaces, the primary difference being the weighting of the control points, which makes NURBS curves rational. (Non-rational, aka simple, B-splines are a special case/subset of rational B-splines, where each control point is a regular non-homogeneous coordinate [no 'w'] rather than a homogeneous coordinate. That is equivalent to having weight "1" at each control point; rational B-splines use the 'w' of each control point as a weight.) By using a two-dimensional grid of control points, NURBS surfaces including planar patches and sections of spheres can be created. These are parametrized with two variables (typically called s and t or u and v). This can be extended to arbitrary dimensions to create NURBS mappings into spaces of any dimension. NURBS curves and surfaces are useful for a number of reasons: The set of NURBS for a given order is invariant under affine transformations: operations like rotations and translations can be applied to NURBS curves and surfaces by applying them to their control points. They offer one common mathematical form for both standard analytical shapes (e.g., conics) and free-form shapes. They provide the flexibility to design a large variety of shapes. They reduce the memory consumption when storing shapes (compared to simpler methods). They can be evaluated reasonably quickly by numerically stable and accurate algorithms. Here, NURBS is mostly discussed in one dimension (curves); it can be generalized to two (surfaces) or even more dimensions. Order The order of a NURBS curve defines the number of nearby control points that influence any given point on the curve. The curve is represented mathematically by a polynomial of degree one less than the order of the curve. Hence, second-order curves (which are represented by linear polynomials) are called linear curves, third-order curves are called quadratic curves, and fourth-order curves are called cubic curves. The number of control points must be greater than or equal to the order of the curve. In practice, cubic curves are the ones most commonly used. Fifth- and sixth-order curves are sometimes useful, especially for obtaining continuous higher order derivatives, but curves of higher orders are practically never used because they lead to internal numerical problems and tend to require disproportionately large calculation times. Control points The control points determine the shape of the curve. Typically, each point of the curve is computed by taking a weighted sum of a number of control points. The weight of each point varies according to the governing parameter. For a curve of degree d, the weight of any control point is only nonzero in d+1 intervals of the parameter space. Within those intervals, the weight changes according to a polynomial function (basis functions) of degree d. At the boundaries of the intervals, the basis functions go smoothly to zero, the smoothness being determined by the degree of the polynomial. As an example, the basis function of degree one is a triangle function. It rises from zero to one, then falls to zero again. While it rises, the basis function of the previous control point falls. In that way, the curve interpolates between the two points, and the resulting curve is a polygon, which is continuous, but not differentiable at the interval boundaries, or knots. Higher degree polynomials have correspondingly more continuous derivatives. 
Note that within the interval the polynomial nature of the basis functions and the linearity of the construction make the curve perfectly smooth, so it is only at the knots that discontinuity can arise. In many applications the fact that a single control point only influences those intervals where it is active is a highly desirable property, known as local support. In modeling, it allows the changing of one part of a surface while keeping other parts unchanged. Adding more control points allows better approximation to a given curve, although only a certain class of curves can be represented exactly with a finite number of control points. NURBS curves also feature a scalar weight for each control point. This allows for more control over the shape of the curve without unduly raising the number of control points. In particular, it adds conic sections like circles and ellipses to the set of curves that can be represented exactly. The term rational in NURBS refers to these weights. The control points can have any dimensionality. One-dimensional points just define a scalar function of the parameter. These are typically used in image processing programs to tune the brightness and color curves. Three-dimensional control points are used abundantly in 3D modeling, where they are used in the everyday meaning of the word 'point', a location in 3D space. Multi-dimensional points might be used to control sets of time-driven values, e.g. the different positional and rotational settings of a robot arm. NURBS surfaces are just an application of this. Each control 'point' is actually a full vector of control points, defining a curve. These curves share their degree and the number of control points, and span one dimension of the parameter space. By interpolating these control vectors over the other dimension of the parameter space, a continuous set of curves is obtained, defining the surface. Knot vector The knot vector is a sequence of parameter values that determines where and how the control points affect the NURBS curve. The number of knots is always equal to the number of control points plus curve degree plus one (i.e. number of control points plus curve order). The knot vector divides the parametric space into the intervals mentioned before, usually referred to as knot spans. Each time the parameter value enters a new knot span, a new control point becomes active, while an old control point is discarded. It follows that the values in the knot vector should be in nondecreasing order, so (0, 0, 1, 2, 3, 3) is valid while (0, 0, 2, 1, 3, 3) is not. Consecutive knots can have the same value. This then defines a knot span of zero length, which implies that two control points are activated at the same time (and of course two control points become deactivated). This has an impact on continuity of the resulting curve or its higher derivatives; for instance, it allows the creation of corners in an otherwise smooth NURBS curve. A number of coinciding knots is sometimes referred to as a knot with a certain multiplicity. Knots with multiplicity two or three are known as double or triple knots. The multiplicity of a knot is limited to the degree of the curve, since a higher multiplicity would split the curve into disjoint parts and would leave control points unused. For first-degree NURBS, each knot is paired with a control point. The knot vector usually starts with a knot that has multiplicity equal to the order. This makes sense, since this activates the control points that have influence on the first knot span. Similarly, the knot vector usually ends with a knot of that multiplicity, and curves with such knot vectors start and end in a control point. 
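The counting and ordering rules just stated are mechanical enough to check in code. A small Python sketch (the function name is made up; the multiplicity limits follow the conventions described above, with end knots allowed the full order):

    from collections import Counter

    def valid_knot_vector(knots, num_ctrl_points, degree):
        # Knot count must equal: number of control points + degree + 1 (i.e. + order).
        if len(knots) != num_ctrl_points + degree + 1:
            return False
        # Values must be nondecreasing: (0, 0, 1, 2, 3, 3) is valid,
        # (0, 0, 2, 1, 3, 3) is not.
        if any(a > b for a, b in zip(knots, knots[1:])):
            return False
        # Interior knots may repeat at most 'degree' times; the end knots may
        # reach multiplicity degree + 1 (a clamped knot vector).
        for value, mult in Counter(knots).items():
            limit = degree + 1 if value in (knots[0], knots[-1]) else degree
            if mult > limit:
                return False
        return True

    assert valid_knot_vector([0, 0, 0, 1, 2, 3, 3, 3], num_ctrl_points=5, degree=2)
    assert not valid_knot_vector([0, 0, 2, 1, 3, 3], num_ctrl_points=4, degree=1)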
The values of the knots control the mapping between the input parameter and the corresponding NURBS value. For example, if a NURBS describes a path through space over time, the knots control the time that the function proceeds past the control points. For the purposes of representing shapes, however, only the ratios of the difference between the knot values matter; in that case, the knot vectors (0, 0, 1, 2, 3, 3) and (0, 0, 2, 4, 6, 6) produce the same curve. The positions of the knot values influence the mapping of parameter space to curve space. Rendering a NURBS curve is usually done by stepping with a fixed stride through the parameter range. By changing the knot span lengths, more sample points can be used in regions where the curvature is high. Another use is in situations where the parameter value has some physical significance, for instance if the parameter is time and the curve describes the motion of a robot arm. The knot span lengths then translate into velocity and acceleration, which are essential to get right to prevent damage to the robot arm or its environment. This flexibility in the mapping is what the phrase non uniform in NURBS refers to. Necessary only for internal calculations, knots are usually not helpful to the users of modeling software. Therefore, many modeling applications do not make the knots editable or even visible. It's usually possible to establish reasonable knot vectors by looking at the variation in the control points. More recent versions of NURBS software (e.g., Autodesk Maya and Rhinoceros 3D) allow for interactive editing of knot positions, but this is significantly less intuitive than the editing of control points. Construction of the basis functions The B-spline basis functions used in the construction of NURBS curves are usually denoted as N_{i,n}(u), in which i corresponds to the i-th control point, and n corresponds with the degree of the basis function. The parameter dependence is frequently left out, so we can write N_{i,n}. The definition of these basis functions is recursive in n. The degree-0 functions N_{i,0} are piecewise constant functions. They are one on the corresponding knot span and zero everywhere else. Effectively, N_{i,n} is a linear interpolation of N_{i,n-1} and N_{i+1,n-1}. The latter two functions are non-zero for n knot spans, overlapping for n − 1 knot spans. The function N_{i,n} is computed as N_{i,n} = f_{i,n} N_{i,n-1} + g_{i+1,n} N_{i+1,n-1}, where f_{i,n} rises linearly from zero to one on the interval where N_{i,n-1} is non-zero, while g_{i+1,n} falls from one to zero on the interval where N_{i+1,n-1} is non-zero. As mentioned before, N_{i,1} is a triangular function, nonzero over two knot spans rising from zero to one on the first, and falling to zero on the second knot span. Higher order basis functions are non-zero over correspondingly more knot spans and have correspondingly higher degree. If u is the parameter, and k_i is the i-th knot, we can write the functions f and g as f_{i,n}(u) = (u − k_i) / (k_{i+n} − k_i) and g_{i,n}(u) = (k_{i+n} − u) / (k_{i+n} − k_i). The functions f and g are positive when the corresponding lower order basis functions are non-zero. By induction on n it follows that the basis functions are non-negative for all values of n and u. This makes the computation of the basis functions numerically stable. Again by induction, it can be proved that the sum of the basis functions for a particular value of the parameter is unity. This is known as the partition of unity property of the basis functions. 
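The recursion above transcribes almost line for line into Python. A sketch, using the common convention that a 0/0 term arising from a zero-length knot span counts as zero:

    def basis(i, n, u, knots):
        # Degree 0: one on the half-open knot span [k_i, k_{i+1}), zero elsewhere.
        if n == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0

        def ramp(num, den):
            # A zero-length knot span contributes nothing (0/0 -> 0 convention).
            return num / den if den != 0.0 else 0.0

        f = ramp(u - knots[i], knots[i + n] - knots[i])                  # f_{i,n}(u)
        g = ramp(knots[i + n + 1] - u, knots[i + n + 1] - knots[i + 1])  # g_{i+1,n}(u)
        return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

    # Partition of unity: the basis functions sum to one inside the domain.
    degree = 2
    knots = [0, 0, 0, 1, 2, 3, 3, 3]        # clamped knot vector, 5 control points
    num_basis = len(knots) - degree - 1     # knot count = control points + degree + 1
    total = sum(basis(i, degree, 1.5, knots) for i in range(num_basis))
    assert abs(total - 1.0) < 1e-12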
The figures show the linear and the quadratic basis functions for the knots {..., 0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1, ...}. One knot span is considerably shorter than the others. On that knot span, the peak in the quadratic basis function is more distinct, reaching almost one. Conversely, the adjoining basis functions fall to zero more quickly. In the geometrical interpretation, this means that the curve approaches the corresponding control point closely. In case of a double knot, the length of the knot span becomes zero and the peak reaches one exactly. The basis function is no longer differentiable at that point. The curve will have a sharp corner if the neighbour control points are not collinear. General form of a NURBS curve Using the definitions of the basis functions N_{i,n} from the previous paragraph, a NURBS curve takes the following form: C(u) = Σ_{i=1..k} N_{i,n}(u) w_i P_i / Σ_{j=1..k} N_{j,n}(u) w_j. In this, k is the number of control points P_i and w_i are the corresponding weights. The denominator is a normalizing factor that evaluates to one if all weights are one. This can be seen from the partition of unity property of the basis functions. It is customary to write this as C(u) = Σ_{i=1..k} R_{i,n}(u) P_i, in which the functions R_{i,n}(u) = N_{i,n}(u) w_i / Σ_{j=1..k} N_{j,n}(u) w_j are known as the rational basis functions. General form of a NURBS surface A NURBS surface is obtained as the tensor product of two NURBS curves, thus using two independent parameters u and v (with indices i and j respectively): S(u, v) = Σ_{i=1..k} Σ_{j=1..l} R_{i,j}(u, v) P_{i,j}, with R_{i,j}(u, v) = N_{i,n}(u) N_{j,m}(v) w_{i,j} / (Σ_{p=1..k} Σ_{q=1..l} N_{p,n}(u) N_{q,m}(v) w_{p,q}) as rational basis functions. Manipulating NURBS objects A number of transformations can be applied to a NURBS object. For instance, if some curve is defined using a certain degree and N control points, the same curve can be expressed using the same degree and N+1 control points. In the process a number of control points change position and a knot is inserted in the knot vector. These manipulations are used extensively during interactive design. When adding a control point, the shape of the curve should stay the same, forming the starting point for further adjustments. A number of these operations are discussed below. Knot insertion As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is d, then d − 1 control points are replaced by d new ones. The shape of the curve stays the same. A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion. Knot removal Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is used to clean up after an interactive session in which control points may have been added manually, or after importing a curve from a different representation, where a straightforward conversion process leads to redundant control points. Degree elevation A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree. This is frequently used when combining separate NURBS curves, e.g., when creating a NURBS surface interpolating between a set of NURBS curves or when unifying adjacent curves. In the process, the different curves should be brought to the same degree, usually the maximum degree of the set of curves. The process is known as degree elevation. 
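Tying together the basis functions and the rational curve form given earlier in this section, the sketch below evaluates C(u) for a rational quadratic segment whose middle weight is √2/2; as the circle example in the next section exploits, such a segment traces an exact quarter of the unit circle. The basis function is the same Cox–de Boor recursion as in the previous sketch:

    from math import hypot, sqrt

    def basis(i, n, u, knots):
        # Cox-de Boor recursion; zero-length knot spans contribute nothing.
        if n == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        def ramp(num, den):
            return num / den if den != 0.0 else 0.0
        return (ramp(u - knots[i], knots[i + n] - knots[i]) * basis(i, n - 1, u, knots)
                + ramp(knots[i + n + 1] - u, knots[i + n + 1] - knots[i + 1])
                * basis(i + 1, n - 1, u, knots))

    def nurbs_point(u, degree, knots, ctrl, weights):
        # C(u) = sum_i N_{i,n}(u) w_i P_i / sum_j N_{j,n}(u) w_j
        coeffs = [basis(i, degree, u, knots) * w for i, w in enumerate(weights)]
        denom = sum(coeffs)
        return tuple(sum(c * p[axis] for c, p in zip(coeffs, ctrl)) / denom
                     for axis in (0, 1))

    ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # quarter-circle control points
    weights = [1.0, sqrt(2) / 2, 1.0]
    knots = [0, 0, 0, 1, 1, 1]
    for u in (0.0, 0.25, 0.5, 0.75, 0.999):       # u = 1 lies outside the half-open domain
        x, y = nurbs_point(u, 2, knots, ctrl, weights)
        assert abs(hypot(x, y) - 1.0) < 1e-9      # every sample lies on the unit circle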
Curvature The most important property in differential geometry is the curvature κ. It describes the local properties (edges, corners, etc.) and relations between the first and second derivative, and thus, the precise curve shape. Having determined the derivatives it is easy to compute the curvature as κ = |r′ × r″| / |r′|³ or, for an arc-length parameterization, to approximate it from the second derivative as κ = |r″(s₀)|. The direct computation of the curvature κ with these equations is the big advantage of parameterized curves against their polygonal representations. Example: a circle Non-rational splines or Bézier curves may approximate a circle, but they cannot represent it exactly. Rational splines can represent any conic section—including the circle—exactly. This representation is not unique, but one possibility appears below, with the control points given as (x, y) pairs and associated weights: (1, 0) with weight 1; (1, 1) with weight √2/2; (0, 1) with weight 1; (−1, 1) with weight √2/2; (−1, 0) with weight 1; (−1, −1) with weight √2/2; (0, −1) with weight 1; (1, −1) with weight √2/2; and (1, 0) with weight 1. The order is three, since a circle is a quadratic curve and the spline's order is one more than the degree of its piecewise polynomial segments. The knot vector is {0, 0, 0, π/2, π/2, π, π, 3π/2, 3π/2, 2π, 2π, 2π}. The circle is composed of four quarter circles, tied together with double knots. Although double knots in a third order NURBS curve would normally result in loss of continuity in the first derivative, the control points are positioned in such a way that the first derivative is continuous. In fact, the curve is infinitely differentiable everywhere, as it must be if it exactly represents a circle. The curve represents a circle exactly, but it is not exactly parametrized in the circle's arc length. This means, for example, that the point at parameter t does not lie at (cos t, sin t) (except for the start, middle and end point of each quarter circle, since the representation is symmetrical). This would be impossible, since the x coordinate of the circle would otherwise provide an exact rational polynomial expression for cos t, which is impossible. The circle does make one full revolution as its parameter t goes from 0 to 2π, but this is only because the knot vector was arbitrarily chosen as multiples of π/2. See also Spline Bézier surface de Boor's algorithm Triangle mesh Point cloud Rational motion Isogeometric analysis References External links Clear explanation of NURBS for non-experts About Nonuniform Rational B-Splines – NURBS TinySpline: Opensource C-library with bindings for various languages Computer-aided design Splines (mathematics) 3D computer graphics Interpolation Multivariate interpolation
Non-uniform rational B-spline
Engineering
4,655
1,343,866
https://en.wikipedia.org/wiki/Model-based%20testing
Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment. A model describing a SUT is usually an abstract, partial presentation of the SUT's desired behavior. Test cases derived from such a model are functional tests on the same level of abstraction as the model. These test cases are collectively known as an abstract test suite. An abstract test suite cannot be directly executed against an SUT because the suite is on the wrong level of abstraction. An executable test suite needs to be derived from a corresponding abstract test suite. The executable test suite can communicate directly with the system under test. This is achieved by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing environments, models contain enough information to generate executable test suites directly. In others, elements in the abstract test suite must be mapped to specific statements or method calls in the software to create a concrete test suite. This is called solving the "mapping problem". In the case of online testing (see below), abstract test suites exist only conceptually but not as explicit artifacts. Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no known single best approach for test derivation. It is common to consolidate all test derivation related parameters into a package that is often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about those parts of a model that should be focused on, or the conditions for finishing testing (test stopping criteria). Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing. Model-based testing for complex software systems is still an evolving field. Models Especially in Model Driven Engineering or in Object Management Group's (OMG's) model-driven architecture, models are built before or parallel with the corresponding systems. Models can also be constructed from completed systems. Typical modeling languages for test generation include UML, SysML, mainstream programming languages, finite machine notations, and mathematical formalisms such as Z, B (Event-B), Alloy or Coq. Deploying model-based testing There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests. Online testing means that a model-based testing tool connects directly to an SUT and tests it dynamically. Offline generation of executable tests means that a model-based testing tool generates test cases as computer-readable assets that can be later run automatically; for example, a collection of Python classes that embodies the generated testing logic. Offline generation of manually deployable tests means that a model-based testing tool generates test cases as human-readable assets that can later assist in manual testing; for instance, a PDF document in a human language describing the generated test steps. 
Deriving tests algorithmically The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically. From finite-state machines Often the model is translated to or interpreted as a finite-state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models. Depending on the complexity of the system under test and the corresponding model the number of paths can be very large, because of the huge amount of possible configurations of the system. To find test cases that can cover an appropriate, but finite, number of paths, test criteria are needed to guide the selection. This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing. Multiple techniques for test case generation have been developed and are surveyed by Rushby. Test criteria are described in terms of general graphs in the testing textbook. Theorem proving Theorem proving was originally used for automated proving of logical formulas. For model-based testing approaches, the system is modeled by a set of predicates, specifying the system's behavior. To derive test cases, the model is partitioned into equivalence classes over the valid interpretation of the set of the predicates describing the system under test. Each class describes a certain system behavior, and, therefore, can serve as a test case. The simplest partitioning is with the disjunctive normal form approach wherein the logical expressions describing the system's behavior are transformed into the disjunctive normal form. Constraint logic programming and symbolic execution Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by the means of constraints. Solving the set of constraints can be done by Boolean solvers (e.g. SAT-solvers based on the Boolean satisfiability problem) or by numerical analysis, like the Gaussian elimination. A solution found by solving the set of constraints formulas can serve as a test cases for the corresponding system. Constraint programming can be combined with symbolic execution. In this approach a system model is executed symbolically, i.e. collecting data constraints over different control paths, and then using the constraint programming method for solving the constraints and producing test cases. Model checking Model checkers can also be used for test case generation. Originally model checking was developed as a technique to check if a property of a specification is valid in a model. When used for testing, a model of the system under test, and a property to test is provided to the model checker. Within the procedure of proofing, if this property is valid in the model, the model checker detects witnesses and counterexamples. A witness is a path where the property is satisfied, whereas a counterexample is a path in the execution of the model where the property is violated. These paths can again be used as test cases. 
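To make the finite-state-machine approach described above concrete, here is a small Python sketch that searches a toy transition system for executable paths and records each one as an abstract test case; the model, state names, and depth bound are all invented for illustration:

    def derive_tests(transitions, start, max_depth):
        # transitions: dict mapping state -> list of (action, next_state) pairs.
        # Every executable path of length 1..max_depth from the start state
        # becomes one abstract test case, recorded as its action sequence.
        tests = []

        def walk(state, path):
            if path:
                tests.append(list(path))
            if len(path) == max_depth:
                return
            for action, nxt in transitions.get(state, []):
                path.append(action)
                walk(nxt, path)
                path.pop()

        walk(start, [])
        return tests

    # Toy SUT model: a session that must authenticate before reading data.
    model = {
        "idle":   [("login_ok", "authed"), ("login_fail", "idle")],
        "authed": [("read", "authed"), ("logout", "idle")],
    }
    for case in derive_tests(model, "idle", max_depth=3):
        print(" -> ".join(case))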
Test case generation by using a Markov chain test model Markov chains are an efficient way to handle Model-based Testing. Test models realized with Markov chains can be understood as a usage model; this is referred to as Usage/Statistical Model Based Testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: the finite-state machine (FSM), which represents all possible usage scenarios of the tested system, and the Operational Profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The first (FSM) helps to know what can be or has been tested and the second (OP) helps to derive operational test cases. Usage/Statistical Model-based Testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear at a very low rate. This approach offers a pragmatic way to statistically derive test cases that are focused on improving the reliability of the system under test. Usage/Statistical Model Based Testing was recently extended to be applicable to embedded software systems. See also Domain-specific language Domain-specific modeling Model-driven architecture Model-driven engineering Object-oriented analysis and design Time partition testing References Further reading OMG UML 2 Testing Profile; Practical Model-Based Testing: A Tools Approach, Mark Utting and Bruno Legeard, Morgan-Kaufmann 2007. Model-Based Software Testing and Analysis with C#, Jonathan Jacky, Margus Veanes, Colin Campbell, and Wolfram Schulte, Cambridge University Press 2008. Model-Based Testing of Reactive Systems Advanced Lecture Series, LNCS 3472, Springer-Verlag, 2005. A Systematic Review of Model Based Testing Tool Support, Muhammad Shafique, Yvan Labiche, Carleton University, Technical Report, May 2010. 2011/2012 Model-based Testing User Survey: Results and Analysis. Robert V. Binder. System Verification Associates, February 2012 Software testing
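A sketch of the usage-model idea in Python: the operational profile supplies per-state transition probabilities, and test cases are sampled by random walks from start to end. All states, actions, and probabilities below are invented for illustration:

    import random

    def statistical_tests(chain, start, end, n_cases, seed=0):
        # chain: state -> list of (action, next_state, probability) triples;
        # the probabilities per state sum to 1 and encode the operational profile.
        rng = random.Random(seed)
        cases = []
        for _ in range(n_cases):
            state, path = start, []
            while state != end:
                actions, nexts, probs = zip(*chain[state])
                i = rng.choices(range(len(probs)), weights=probs)[0]
                path.append(actions[i])
                state = nexts[i]
            cases.append(path)
        return cases

    usage = {  # toy web-shop usage model: most sessions browse, few buy
        "start":    [("open_app", "home", 1.0)],
        "home":     [("browse", "home", 0.7), ("buy", "checkout", 0.1),
                     ("quit", "end", 0.2)],
        "checkout": [("pay", "end", 1.0)],
    }
    for case in statistical_tests(usage, "start", "end", n_cases=3):
        print(case)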
Model-based testing
Engineering
1,759
51,516,805
https://en.wikipedia.org/wiki/NGC%20183
NGC 183 is an elliptical galaxy located in the constellation Andromeda. It was discovered on November 5, 1866, by Truman Safford. References External links 0183 Elliptical galaxies Discoveries by Truman Safford Andromeda (constellation) 002298
NGC 183
Astronomy
55
11,421,724
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z157/R69/R10
In molecular biology, Small nucleolar RNA Z157 (homologous to R69 and R10) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z157 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z157 was identified in screens of Oryza sativa and Arabidopsis thaliana. References External links Small nuclear RNA
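As a toy illustration of the conserved motifs mentioned above, a few lines of Python that scan an RNA string for candidate C boxes (UGAUGA) and D boxes (CUGA); the example sequence is invented, and real snoRNA annotation relies on far more context than a bare motif match:

    import re

    def find_cd_boxes(rna: str) -> dict:
        # Report the 0-based start positions of the candidate box motifs.
        return {
            "C box (UGAUGA)": [m.start() for m in re.finditer("UGAUGA", rna)],
            "D box (CUGA)": [m.start() for m in re.finditer("CUGA", rna)],
        }

    seq = "GGUGAUGAACCUUGCAAUGAUGCUGACC"   # made-up sequence for illustration
    print(find_cd_boxes(seq))   # C box near the 5' end, D box near the 3' end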
Small nucleolar RNA Z157/R69/R10
Chemistry
215
12,364,346
https://en.wikipedia.org/wiki/C6H6O
{{DISPLAYTITLE:C6H6O}} The molecular formula C6H6O (molar mass: 94.11 g/mol, exact mass: 94.0419 u) may refer to: Oxanorbornadiene (OND) Oxepin Phenol, also called carbolic acid or phenolic acid
C6H6O
Chemistry
75
14,964,610
https://en.wikipedia.org/wiki/Seversk%20State%20Technological%20Academy
Seversk State Technological Academy (Russian: Северская государственная технологическая академия) is a post-secondary educational institution in the City of Seversk, Tomsk Oblast, Russia. The school has 1,150 students, of whom 500 study full-time, and 75 professors. The rector is Aleksandr Nikolaevich Zhiganov. History The school was founded in 1959 as a branch of the Physics-Technological faculty of Tomsk Polytechnic University. From 1996 to 2001, the school was known as the Seversk Technological Institute of Tomsk Polytechnic University. Later, it became an independent institution called the Seversk State Technological Institute and in 2005, adopted its current name. Academics The school has four main faculties: Management Technology Electrical Engineering and Automation Technological Faculty Continuing Education Foreign relations The school has partnered with a number of foreign universities including University of Dortmund and University of Karlsruhe in Germany, Open University in the United Kingdom, Boston University and Massachusetts Institute of Technology, United States and Politecnico di Milano in Italy. See also List of institutions of higher learning in Russia Education in Siberia References External links University Home Page Universities in Tomsk Oblast Russia Educational institutions established in 1959 1959 establishments in Russia Moscow Engineering Physics Institute
Seversk State Technological Academy
Engineering
279
18,111,385
https://en.wikipedia.org/wiki/Dihydrogen%20cation
The dihydrogen cation or hydrogen molecular ion is a cation (positive ion) with formula H2^+. It consists of two hydrogen nuclei (protons) sharing a single electron. It is the simplest molecular ion. The ion can be formed from the ionization of a neutral hydrogen molecule (H2) by electron impact. It is commonly formed in molecular clouds in space by the action of cosmic rays. The dihydrogen cation is of great historical, theoretical, and experimental interest. Historically it is of interest because, having only one electron, the equations of quantum mechanics that describe its structure can be solved approximately in a relatively straightforward way, as long as the motion of the nuclei and relativistic and quantum electrodynamic effects are neglected. The first such solution was derived by Ø. Burrau in 1927, just one year after the wave theory of quantum mechanics was published. The theoretical interest arises because an accurate mathematical description, taking into account the quantum motion of all constituents and also the interaction of the electron with the radiation field, is feasible. The description's accuracy has steadily improved over more than half a century, eventually resulting in a theoretical framework allowing ultra-high-accuracy predictions for the energies of the rotational and vibrational levels in the electronic ground state, which are mostly metastable. In parallel, the experimental approach to the study of the cation has undergone a fundamental evolution with respect to earlier experimental techniques used in the 1960s and 1980s. Employing advanced techniques, such as ion trapping and laser cooling, the rotational and vibrational transitions can be investigated in extremely fine detail. The corresponding transition frequencies can be precisely measured and the results can be compared with the precise theoretical predictions. Another approach for precision spectroscopy relies on cooling in a cryogenic magneto-electric trap (Penning trap); here the cations' motion is cooled resistively and the internal vibration and rotation decays by spontaneous emission. Then, electron spin resonance transitions can be precisely studied. These advances have turned the dihydrogen cations into one more family of bound systems relevant for the determination of fundamental constants of atomic and nuclear physics, after the hydrogen atom family (including hydrogen-like ions) and the helium atom family. Physical properties Bonding in H2^+ can be described as a covalent one-electron bond, which has a formal bond order of one half. The ground state energy of the ion is -0.597 Hartree. The bond length in the ground state is 2.00 Bohr radii. Isotopologues The dihydrogen cation has six isotopologues. Each of the two nuclei can be one of the following: proton (p, the most common one), deuteron (d), or triton (t). H2^+ = ^1H2^+ (dihydrogen cation, the common one) [HD]^+ = [^1H^2H]^+ (hydrogen deuterium cation) D2^+ = ^2H2^+ (dideuterium cation) [HT]^+ = [^1H^3H]^+ (hydrogen tritium cation) [DT]^+ = [^2H^3H]^+ (deuterium tritium cation) T2^+ = ^3H2^+ (ditritium cation) Quantum mechanical analysis Clamped-nuclei approximation An approximate description of the dihydrogen cation starts with the neglect of the motion of the nuclei - the so-called clamped-nuclei approximation. This is a good approximation because the nuclei (proton, deuteron or triton) are more than a factor of 1000 heavier than the electron. 
Therefore, the motion of the electron is treated first, for a given (arbitrary) nucleus-nucleus distance R. The electronic energy of the molecule E is computed and the computation is repeated for different values of R. The nucleus-nucleus repulsive energy e²/(4πε₀R) has to be added to the electronic energy, resulting in the total molecular energy Etot(R). The energy E is the eigenvalue of the Schrödinger equation for the single electron. The equation can be solved in a relatively straightforward way due to the lack of electron–electron repulsion (electron correlation). The wave equation (a partial differential equation) separates into two coupled ordinary differential equations when using prolate spheroidal coordinates instead of cartesian coordinates. The analytical solution of the equation, the wave function, is therefore proportional to a product of two infinite power series. The numerical evaluation of the series can be readily performed on a computer. The analytical solutions for the electronic energy eigenvalues are also a generalization of the Lambert W function which can be obtained using a computer algebra system within an experimental mathematics approach. Quantum chemistry and physics textbooks usually treat the binding of the molecule in the electronic ground state by the simplest possible ansatz for the wave function: the (normalized) sum of two 1s hydrogen wave functions centered on each nucleus. This ansatz correctly reproduces the binding but is numerically unsatisfactory. Historical notes Early attempts to treat H2^+ using the old quantum theory were published in 1922 by Karel Niessen and Wolfgang Pauli, and in 1925 by Harold Urey. The first successful quantum mechanical treatment of H2^+ was published by the Danish physicist Øyvind Burrau in 1927, just one year after the publication of wave mechanics by Erwin Schrödinger. In 1928, Linus Pauling published a review putting together the work of Burrau with the work of Walter Heitler and Fritz London on the hydrogen molecule. The complete mathematical solution of the electronic energy problem for H2^+ in the clamped-nuclei approximation was provided by Wilson (1928) and Jaffé (1934). Johnson (1940) gives a succinct summary of their solution. The solutions of the clamped-nuclei Schrödinger equation The electronic Schrödinger wave equation for the hydrogen molecular ion with two fixed nuclear centers, labeled A and B, and one electron can be written as (−(ħ²/(2m))∇² + V)ψ = Eψ, where V is the electron-nuclear Coulomb potential energy function: V = −(e²/(4πε₀))(1/ra + 1/rb), and E is the (electronic) energy of a given quantum mechanical state (eigenstate), with the electronic state function ψ = ψ(r) depending on the spatial coordinates of the electron. An additive term e²/(4πε₀R), which is constant for fixed internuclear distance R, has been omitted from the potential V, since it merely shifts the eigenvalue. The distances between the electron and the nuclei are denoted ra and rb. In atomic units (ħ = m = e = 4πε₀ = 1) the wave equation is (−½∇² − 1/ra − 1/rb)ψ = Eψ. We choose the midpoint between the nuclei as the origin of coordinates. It follows from general symmetry principles that the wave functions can be characterized by their symmetry behavior with respect to the point group inversion operation i (r ↦ −r). There are wave functions ψg(r), which are symmetric with respect to i, ψg(−r) = ψg(r), and there are wave functions ψu(r), which are antisymmetric under this symmetry operation: ψu(−r) = −ψu(r). The suffixes g and u (from the German gerade and ungerade) occurring here denote the symmetry behavior under the point group inversion operation i. 
Their use is standard practice for the designation of electronic states of diatomic molecules, whereas for atomic states the terms even and odd are used. The ground state (the lowest state) of H2^+ is denoted X²Σg⁺ or 1sσg and it is gerade. There is also the first excited state A²Σu⁺ (2pσu), which is ungerade. Asymptotically, the (total) eigenenergies Eg/u for these two lowest lying states have the same asymptotic expansion in inverse powers of the internuclear distance R. This expansion and the energy curves include the internuclear repulsion term 1/R. The actual difference between these two energies is called the exchange energy splitting and is given by ΔE = Eu − Eg ≈ (4/e) R e^(−R) (in atomic units), which exponentially vanishes as the internuclear distance R gets greater. The lead term (4/e) R e^(−R) was first obtained by the Holstein–Herring method. Similarly, asymptotic expansions in powers of 1/R have been obtained to high order by Cizek et al. for the lowest ten discrete states of the hydrogen molecular ion (clamped nuclei case). For general diatomic and polyatomic molecular systems, the exchange energy is thus very elusive to calculate at large internuclear distances but is nonetheless needed for long-range interactions including studies related to magnetism and charge exchange effects. These are of particular importance in stellar and atmospheric physics. The energies for the lowest discrete states (the ²Σg⁺ and ²Σu⁺ families, together with ²Πu and ²Πg states) can be obtained to within arbitrary accuracy using computer algebra from the generalized Lambert W function (see eq. (3) in that site and reference). They were obtained initially by numerical means to within double precision by the most precise program available, namely ODKIL. Note that although the generalized Lambert W function eigenvalue solutions supersede these asymptotic expansions, in practice, they are most useful near the bond length. The complete Hamiltonian of H2^+ (as for all centrosymmetric molecules) does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u electronic states (called ortho-para mixing) and give rise to ortho-para transitions. Born-Oppenheimer approximation Once the energy function Etot(R) has been obtained, one can compute the quantum states of rotational and vibrational motion of the nuclei, and thus of the molecule as a whole. The corresponding 'nuclear' Schrödinger equation is a one-dimensional ordinary differential equation, where the nucleus-nucleus distance R is the independent coordinate. The equation describes the motion of a fictitious particle of mass equal to the reduced mass of the two nuclei, in the potential Etot(R)+VL(R), where the second term is the centrifugal potential due to rotation with angular momentum described by the quantum number L. The eigenenergies of this Schrödinger equation are the total energies of the whole molecule, electronic plus nuclear. High-accuracy ab initio theory The Born-Oppenheimer approximation is unsuited for describing the dihydrogen cation accurately enough to explain the results of precision spectroscopy. The full Schrödinger equation for this cation, without the approximation of clamped nuclei, is much more complex, but nevertheless can be solved numerically essentially exactly using a variational approach. 
Thereby, the simultaneous motion of the electron and of the nuclei is treated exactly. When the solutions are restricted to the lowest-energy orbital, one obtains the rotational and ro-vibrational states' energies and wavefunctions. The numerical uncertainty of the energies and the wave functions found in this way is negligible compared to the systematic error stemming from using the Schrödinger equation, rather than fundamentally more accurate equations. Indeed, the Schrödinger equation does not incorporate all relevant physics, as is known from the hydrogen atom problem. More accurate treatments need to consider the physics that is described by the Dirac equation or, even more accurately, by quantum electrodynamics. The most accurate solutions of the ro-vibrational states are found by applying non-relativistic quantum electrodynamics (NRQED) theory. For comparison with experiment, one requires differences of state energies, i.e. transition frequencies. For transitions between ro-vibrational levels having small rotational and moderate vibrational quantum numbers the frequencies have been calculated with very small theoretical fractional uncertainty. Additional contributions to the uncertainty of the predicted frequencies arise from the uncertainties of fundamental constants, which are input to the theoretical calculation, especially from the ratio of the proton mass and the electron mass. Using a sophisticated ab initio formalism, also the hyperfine energies can be computed accurately, see below. Experimental studies Precision spectroscopy Because of its relative simplicity, the dihydrogen cation is the molecule that is most precisely understood, in the sense that theoretical calculations of its energy levels match the experimental results with the highest level of agreement. Specifically, spectroscopically determined pure rotational and ro-vibrational transition frequencies of the particular isotopologue HD^+ agree with theoretically computed transition frequencies. Four high-precision experiments yielded comparisons between theory and experiment with very small total fractional uncertainties. The level of agreement is actually limited neither by theory nor by experiment but rather by the uncertainty of the current values of the masses of the particles, that are used as input parameters to the calculation. In order to measure the transition frequencies with high accuracy, the spectroscopy of the dihydrogen cation had to be performed under special conditions. Therefore, ensembles of HD^+ ions were trapped in a quadrupole ion trap under ultra-high vacuum, sympathetically cooled by laser-cooled beryllium ions, and interrogated using particular spectroscopic techniques. The hyperfine structure of the homonuclear isotopologue H2^+ has been measured extensively and precisely by Jefferts in 1969. Finally, in 2021, ab initio theory computations were able to provide the quantitative details of the structure with uncertainty smaller than that of the experimental data. Some contributions to the measured hyperfine structure have also been theoretically confirmed. The implication of these agreements is that one can deduce a spectroscopic value of the ratio of the electron mass to the reduced proton-deuteron mass m_p m_d/(m_p + m_d), which is an input to the ab initio theory. The ratio is fitted such that theoretical prediction and experimental results agree. 
The uncertainty of the obtained ratio is comparable to the one obtained from direct mass measurements of proton, deuteron, electron, and HD+ via cyclotron resonance in Penning traps. Occurrence in space Formation The dihydrogen ion is formed in nature by the interaction of cosmic rays and the hydrogen molecule. An electron is knocked off, leaving the cation behind: H2 + cosmic ray → H2+ + e− + cosmic ray. Cosmic ray particles have enough energy to ionize many molecules before coming to a stop. The ionization energy of the hydrogen molecule is 15.603 eV. High speed electrons also cause ionization of hydrogen molecules, with a peak cross section around 50 eV. For high speed protons the ionization cross section peaks at a higher energy. A cosmic ray proton at lower energy can also strip an electron off a neutral hydrogen molecule to form a neutral hydrogen atom and the dihydrogen cation (p+ + H2 → H + H2+), a process whose cross section peaks at a lower energy than that for ionization. Destruction In nature the ion is destroyed by reacting with other hydrogen molecules: H2+ + H2 → H3+ + H. Production in the laboratory In the laboratory, the ion is easily produced by electron bombardment from an electron gun. An artificial plasma discharge cell can also produce the ion. See also Symmetry of diatomic molecules Dirac Delta function model (one-dimensional version of H2+) Di-positronium Euler's three-body problem (classical counterpart) Few-body systems Helium atom Helium hydride ion Trihydrogen cation Triatomic hydrogen Lambert W function Molecular astrophysics Holstein–Herring method Three-body problem List of quantum-mechanical systems with analytical solutions References Hydrogen physics Cations Quantum chemistry Quantum models
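The textbook LCAO ansatz mentioned in the clamped-nuclei section lends itself to a compact numerical illustration. The sketch below uses the standard closed-form overlap, Coulomb, and exchange integrals for two 1s orbitals in atomic units; these formulas come from standard quantum chemistry texts rather than from this article, so treat the snippet as an illustrative sketch:

    from math import exp

    def lcao_total_energy(R: float) -> float:
        # Standard H2+ LCAO (1s) integrals in atomic units:
        S = exp(-R) * (1 + R + R * R / 3)        # overlap integral
        J = exp(-2 * R) * (1 + 1 / R) - 1 / R    # Coulomb integral
        K = -exp(-R) * (1 + R)                   # exchange (resonance) integral
        E_elec = -0.5 + (J + K) / (1 + S)        # bonding (gerade) electronic energy
        return E_elec + 1 / R                    # add the nuclear repulsion 1/R

    # Crude scan for the equilibrium separation. Simple LCAO yields a minimum
    # near R = 2.5 bohr with a well depth near 0.065 hartree, noticeably poorer
    # than the accurate values quoted earlier in the article (bond length
    # 2.00 bohr, ground-state energy -0.597 hartree), consistent with the
    # text's remark that the ansatz is "numerically unsatisfactory".
    energies = {r / 100: lcao_total_energy(r / 100) for r in range(150, 501)}
    R_min = min(energies, key=energies.get)
    print(R_min, energies[R_min])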
Dihydrogen cation
Physics,Chemistry
3,267
53,765,068
https://en.wikipedia.org/wiki/Integration%20Driven%20Development
Integration Driven Development (IDD) is an incremental approach to systems development where the contents of the increments are determined by the integration plan, rather than the opposite. The increments can be seen as defined system capability changes - "Deltas" (Taxén et al., 2011). The advantages compared to other incremental development models (such as RUP and Scrum) still apply, such as short design cycles, early testing and managing late requirement changes; however, IDD adds pull to the concept and also has the advantage of optimizing the contents of each increment to allow early integration and testing. Pull from integration and testing Pull, in this context, means that information is requested from the user when needed (or is planned to be integrated and tested), as opposed to delivered when it happens to be ready. Development planning has to adjust to the optimal order of integration. System implementation is driven by what is going to be integrated and tested. System design, in turn, is driven by the planned implementation, and requirements by the planned system design steps. By doing so, artifacts will be delivered just-in-time, thus enabling fast feedback. Advantages and Limitations IDD is not used instead of other incremental models, but rather as an enhancement that will make those models more efficient. One obstacle when using IDD is to create the integration plan – the definition of what to develop and integrate at a given time. One way that has proven successful is to use System Anatomies for original planning and Integration Anatomies for re-planning and follow-up. Since all planning will require time and resources, IDD may be considered unnecessary for development with low complexity of the system and organization (i.e., small teams developing small systems). Further reading Lilliesköld, J., Taxén, L., Karlsson, M., & Klasson, M. (2005). Managing complex development projects – using the system anatomy. In Proceedings Portland International Conference on Management of Technology and Engineering, PICMET '05, July 31 – Aug 4th, 2005, Portland, Oregon – USA. Taxén L et al., The System Anatomy: Enabling Agile Project Management, Studentlitteratur, (2011). Adler, N. (1999). Managing Complex Product Development – Three approaches. EFI, Stockholm School of Economics. Berggren, C., Järkvik, J., & Söderlund, J. (2008). Lagomizing, organic integration, and systems emergency wards: Innovative practices in managing complex systems development projects. Project Management Journal, Supplement, 39, 111–122 Taxén L, Lilliesköld J (2005) Manifesting Shared Affordances in System Development – the System Anatomy, ALOIS*2005, The 3rd International Conference on Action in Language, Organisations and Information Systems, 15–16 March 2005, Limerick, Ireland, pp. 28–47. Retrieved from https://web.archive.org/web/20160303202022/http://www.alois2005.ul.ie/ (Feb 2006). Järkvik, J., Berggren, C., & Söderlund, J. (2007). Innovation in project management: A neo-realistic approach to time-critical complex systems development. IRNOP VIII Conference, Brighton, UK, September 19–21, 2007 Jönsson, P. (2006). The Anatomy-An Instrument for Managing Software Evolution and Evolvability. Second International IEEE Workshop on Software Evolvability (SE'06) (pp. 31–37). Philadelphia, Pennsylvania, USA. September 24, 2006. Taxén, L., & Lilliesköld, J. (2008). Images as action instruments in complex projects, International Journal of Project Management, 26(5), 527–536 Taxén, L., & Petterson, U. (2010). 
Agile and Incremental Development of Large Systems. In The 7th European Systems Engineering Conference, EuSEC 2010, Stockholm, Sweden, May 23–26, 2010. Söderlund, J. (2002). Managing complex development projects: arenas, knowledge processes and time. R&D Management, 32(5), 419–430. Product development Systems engineering
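One way to picture how an integration plan can drive increment contents is to treat the system anatomy as a dependency graph of capabilities and derive the integration order from it. The sketch below is a purely illustrative reading of that idea; the capability names and graph are hypothetical, not taken from the sources above.

```python
from graphlib import TopologicalSorter

# Hypothetical system anatomy: each capability lists the capabilities
# it depends on; integration and testing must follow dependency order.
anatomy = {
    "start-up":      [],
    "configuration": ["start-up"],
    "traffic":       ["start-up", "configuration"],
    "charging":      ["traffic"],
    "supervision":   ["configuration"],
}

# The integration plan: an order in which capability "deltas" can be
# integrated and tested early. This order then pulls design and
# implementation work, rather than the other way around.
plan = list(TopologicalSorter(anatomy).static_order())
print(plan)  # e.g. ['start-up', 'configuration', 'supervision', 'traffic', 'charging']
```

In an IDD reading, each increment would take the next capabilities from this order, so that every increment ends in something that can actually be integrated and tested.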
Integration Driven Development
Engineering
893
2,634,195
https://en.wikipedia.org/wiki/Electrical%20ballast
An electrical ballast is a device placed in series with a load to limit the amount of current in an electrical circuit. A familiar and widely used example is the inductive ballast used in fluorescent lamps to limit the current through the tube, which would otherwise rise to a destructive level due to the negative differential resistance of the tube's voltage-current characteristic. Ballasts vary greatly in complexity. They may be as simple as a resistor, inductor, or capacitor (or a combination of these) wired in series with the lamp, or as complex as the electronic ballasts used in compact fluorescent lamps (CFLs). Current limiting An electrical ballast is a device that limits the current through an electrical load. Ballasts are most often used when a load (such as an arc discharge) has a terminal voltage that declines as the current through the load increases. If such a device were connected to a constant-voltage power supply, it would draw an increasing amount of current until it was destroyed or caused the power supply to fail. To prevent this, a ballast provides a positive resistance or reactance that limits the current, allowing the negative-resistance device to operate properly. Ballasts can also be used simply to limit the current in an ordinary, positive-resistance circuit. Prior to the advent of solid-state ignition, automobile ignition systems commonly included a ballast resistor to regulate the voltage applied to the ignition system. Resistors Fixed resistors For simple, low-powered loads such as a neon lamp, a fixed resistor is commonly used. Because the resistance of the ballast resistor is large, it determines the current in the circuit, even in the face of the negative resistance introduced by the neon lamp. A ballast was also a component used in early automobile engines to lower the supply voltage to the ignition system after the engine had been started. Starting the engine requires a significant amount of electrical current from the battery, resulting in an equally significant voltage drop. To allow the engine to start, the ignition system was designed to operate on this lower voltage. But once the vehicle was started and the starter disengaged, the normal operating voltage was too high for the ignition system. To avoid this problem, a ballast resistor was inserted in series with the ignition system, giving two different operating voltages for the starting and ignition systems. Occasionally this ballast resistor would fail, and the classic symptom of the failure was that the engine ran while being cranked (while the resistor was bypassed) but stalled immediately when cranking ceased (and the resistor was reconnected in the circuit via the ignition switch). Modern electronic ignition systems (those in use since the late 1970s or 1980s) do not require a ballast resistor, as they are flexible enough to operate on either the lower cranking voltage or the normal operating voltage. Another common use of a ballast resistor in the automotive industry is adjusting the ventilation fan speed. The ballast is a fixed resistor, usually with two center taps, and the fan speed selector switch is used to bypass portions of the ballast: all of them for full speed, and none for the low-speed setting. A very common failure occurs when the fan is constantly run at the next-to-full speed setting (usually 3 out of 4). This causes a very short piece of the resistor coil to be operated at a relatively high current (up to 10 A), eventually burning it out and rendering the fan unable to run at the reduced speed settings.
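As a back-of-the-envelope illustration of the fixed-resistor case, the sketch below sizes a series ballast resistor for a small neon indicator lamp. The supply voltage, lamp maintaining voltage, and target current are illustrative values chosen for the example, not figures from this article.

```python
def ballast_resistor(v_supply, v_lamp, i_lamp):
    """Size a series ballast resistor for a constant-voltage supply.

    The resistor must drop the excess of the supply voltage over the
    lamp's maintaining voltage while passing the desired current.
    Returns (resistance in ohms, power dissipated in the resistor in watts).
    """
    r = (v_supply - v_lamp) / i_lamp   # Ohm's law applied to the resistor
    p = (v_supply - v_lamp) * i_lamp   # heat the resistor must shed
    return r, p

# Illustrative figures: 230 V supply, neon lamp maintaining ~90 V at 0.5 mA.
r, p = ballast_resistor(230.0, 90.0, 0.5e-3)
print(f"R = {r / 1000:.0f} kOhm, dissipation = {p * 1000:.0f} mW")
# -> R = 280 kOhm, dissipation = 70 mW
```

Because the resistor's 140 V drop dominates, a modest change in the lamp's (negative-resistance) operating voltage barely changes the circuit current, which is exactly the stabilizing role described above.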
In some consumer electronic equipment, notably in television sets of the valve (vacuum tube) era, but also in some low-cost record players, the vacuum tube heaters were connected in series. Since the voltage drop across all the heaters in series was usually less than the full mains voltage, it was necessary to provide a ballast to drop the excess voltage. A resistor was often used for this purpose, as it was cheap and worked with both alternating current (AC) and direct current (DC). Self-variable resistors Some ballast resistors have the property of increasing in resistance as current through them increases, and decreasing in resistance as current decreases. Physically, some such devices are often built quite like incandescent lamps. Like the tungsten filament of an ordinary incandescent lamp, if current increases, the ballast resistor gets hotter, its resistance goes up, and its voltage drop increases. If current decreases, the ballast resistor gets colder, its resistance drops, and the voltage drop decreases. Therefore, the ballast resistor reduces variations in current, despite variations in applied voltage or changes in the rest of the electric circuit. These devices are sometimes called "barretters" and were used in the series heating circuits of 1930s to 1960s AC/DC radio and TV home receivers. This property can provide more precise current control than merely choosing an appropriate fixed resistor. The power lost in the resistive ballast is also reduced, because a smaller portion of the overall power is dropped in the ballast compared to what might be required with a fixed resistor. Early household clothes dryers sometimes incorporated a germicidal lamp in series with an ordinary incandescent lamp; the incandescent lamp operated as the ballast for the germicidal lamp. A common domestic light in 220–240 V countries in the 1960s was a circular fluorescent tube ballasted by an under-run regular mains filament lamp. Self-ballasted mercury-vapor lamps incorporate ordinary tungsten filaments within the overall envelope of the lamp to act as the ballast and to partially compensate for the red-deficient light produced by the mercury-vapor process. Reactive ballasts An inductor, usually a choke, is very common in line-frequency ballasts to provide the proper starting and operating electrical conditions to power a fluorescent lamp or a high-intensity discharge lamp. (Because of the use of the inductor, such ballasts are usually called magnetic ballasts.) The inductor has two benefits: its reactance limits the power available to the lamp with only minimal power losses in the inductor, and the voltage spike produced when the current through the inductor is rapidly interrupted is used in some circuits to first strike the arc in the lamp. A disadvantage of the inductor is that the current is shifted out of phase with the voltage, producing a poor power factor. In more expensive ballasts, a capacitor is often paired with the inductor to correct the power factor. In line-frequency ballasts that control two or more lamps, different phase relationships are commonly used between the lamps. This not only mitigates the flicker of the individual lamps, it also helps maintain a high power factor. These ballasts are often called lead-lag ballasts, because the current in one lamp leads the mains phase and the current in the other lamp lags the mains phase.
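As a rough illustration of how a series choke is sized, the sketch below treats the lamp arc as a resistive drop in phase with the current, so the supply, lamp, and inductor voltages form a right triangle. This is a simplified model, and the 36 W lamp figures (arc voltage and current) are illustrative nominal values, not data from this article.

```python
import math

def series_choke(v_supply, v_lamp, i_lamp, f_mains):
    """Estimate the series inductance needed to ballast a discharge lamp.

    Model: v_supply**2 = v_lamp**2 + (i_lamp * x_l)**2, with the inductor
    voltage leading the (roughly resistive) lamp voltage by 90 degrees.
    """
    x_l = math.sqrt(v_supply**2 - v_lamp**2) / i_lamp  # required reactance, ohms
    return x_l / (2 * math.pi * f_mains)               # L = X_L / (2 * pi * f)

# Illustrative figures for a 36 W tube on 230 V, 50 Hz mains:
# arc voltage ~103 V at ~0.43 A.
inductance = series_choke(230.0, 103.0, 0.43, 50.0)
print(f"L = {inductance:.2f} H")   # -> roughly 1.5 H
```

The same calculation shows why low line voltages are marginal: with a 120 V supply and the same lamp, the term under the square root shrinks toward zero, which is one reason North American ballasts add an autotransformer winding, as discussed below.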
In most 220–240 V ballasts, the capacitor is not incorporated inside the ballast, as it is in North American ballasts, but is wired in parallel or in series with the ballast. In Europe, and most other 220–240 V territories, the line voltage is sufficient to start lamps over 30 W with a series inductor. In North America and Japan, however, the line voltage (120 V or 100 V respectively) may not be sufficient to start lamps over 30 W with a series inductor, so an autotransformer winding is included in the ballast to step up the voltage. The autotransformer is designed with enough leakage inductance (short-circuit inductance) that the current is appropriately limited. Because of the large inductors and capacitors that must be used, as well as the heavy iron core of the inductor, reactive ballasts operated at line frequency tend to be large and heavy. They commonly also produce acoustic noise (line-frequency hum). Prior to 1980 in the United States, polychlorinated biphenyl (PCB)-based oils were used as an insulating oil in many ballasts to provide cooling and electrical isolation (see Transformer oil). Electronic ballasts An electronic ballast uses solid-state electronic circuitry to provide the proper starting and operating electrical conditions to power discharge lamps. An electronic ballast can be smaller and lighter than a comparably rated magnetic one, and is usually quieter than a magnetic one, which produces a line-frequency hum by vibration of its core laminations. Electronic ballasts are often based on switched-mode power supply (SMPS) topology, first rectifying the input power and then chopping it at a high frequency. Advanced electronic ballasts may allow dimming via pulse-width modulation or via changing the frequency to a higher value. Ballasts incorporating a microcontroller (digital ballasts) may offer remote control and monitoring via networks such as LonWorks, Digital Addressable Lighting Interface (DALI), DMX512, Digital Serial Interface (DSI) or simple analog control using a 0–10 V DC brightness control signal. Systems with remote control of light level via a wireless mesh network have been introduced. Electronic ballasts usually supply power to the lamp at a frequency of 20 kHz or higher, rather than the mains frequency of 50–60 Hz; this substantially eliminates the stroboscopic effect of flicker, a product of the line frequency associated with fluorescent lighting (see photosensitive epilepsy). The high output frequency of an electronic ballast refreshes the phosphors in a fluorescent lamp so rapidly that there is no perceptible flicker. The flicker index, used for measuring perceptible light modulation, has a range from 0.00 to 1.00, with 0 indicating the lowest possibility of flickering and 1 indicating the highest. Lamps operated on magnetic ballasts have a flicker index between 0.04 and 0.07, while digital ballasts have a flicker index below 0.01. Because more gas remains ionized in the arc stream, the lamp operates at about 9% higher efficacy above approximately 10 kHz. Lamp efficiency increases sharply at about 10 kHz and continues to improve until approximately 20 kHz. Electronic ballast retrofits to existing street lights were tested in some Canadian provinces circa 2012; since then, LED retrofits have become more common. With the higher efficiency of the ballast itself and the higher lamp efficacy at higher frequency, electronic ballasts offer higher system efficacy for low-pressure lamps like the fluorescent lamp.
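The flicker index mentioned above can be computed directly from one cycle of a lamp's light-output waveform: it is the area above the cycle average divided by the total area under the curve. The sketch below applies that definition to a simple sinusoidally rippled output; the modulation depths are illustrative values, not measurements from this article.

```python
import numpy as np

def flicker_index(light):
    """Flicker index of one cycle of a non-negative light-output waveform:
    area above the cycle average divided by total area under the curve."""
    avg = light.mean()
    above = np.clip(light - avg, 0.0, None).sum()  # area above the average
    return above / light.sum()                     # dt cancels in the ratio

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)  # one cycle, arbitrary units
for depth in (0.1, 0.5, 1.0):
    light = 1.0 + depth * np.sin(2 * np.pi * t)    # rippled light output
    print(f"modulation depth {depth:.1f}: flicker index = "
          f"{flicker_index(light):.3f}")
# A full-depth sinusoid gives about 0.32; shallow ripple gives a much
# smaller index, consistent with the low values quoted for good ballasts.
```

Real lamp waveforms are not sinusoidal, so measured indices differ, but the trend is the point: the higher and smoother the drive frequency, the smaller the residual light modulation and the lower the index.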
For HID lamps, there is no improvement in lamp efficacy from using a higher frequency. Worse, HID lamps such as metal-halide and high-pressure sodium lamps can have reduced reliability when operated at high frequencies in the kilohertz range, due to acoustic resonance; for these lamps a low-frequency square-wave current drive is mostly used, with frequencies on the order of a few hundred hertz, with the same advantage of lower light depreciation. Most newer-generation electronic ballasts can operate both high-pressure sodium (HPS) lamps and metal-halide lamps. The ballast initially works as a starter for the arc by means of its internal ignitor, supplying a high-voltage impulse, and later works as a limiter/regulator of the current in the circuit. Electronic ballasts also run much cooler and are lighter than their magnetic counterparts. Fluorescent lamp ballast topologies Preheating This technique uses a combination filament–cathode at each end of the lamp in conjunction with a mechanical or automatic (bi-metallic or electronic) switch that initially connects the filaments in series with the ballast to preheat them. When the filaments are disconnected, an inductive pulse from the ballast starts the lamp. This system is described as "preheat" in North America and "switch start" in the UK, and has no specific description in the rest of the world. It is common in 200–240 V countries (and for 100–120 V lamps up to about 30 watts). Although an inductive pulse makes it more likely that the lamp will start when the starter switch opens, it is not actually necessary, and the ballast in such systems can equally be a resistor. A number of fluorescent lamp fittings used a filament lamp as the ballast in the late 1950s through to the 1960s. Special lamps were manufactured that were rated at 170 volts and 120 watts, with a thermal starter built into the 4-pin base. The power requirements were much larger than with an inductive ballast (though the consumed current was the same), but the warmer light from the lamp-type ballast was often preferred by users, particularly in a domestic environment. Resistive ballasts were the only usable type when the only supply available to power the fluorescent lamp was DC. Such fittings used the thermal type of starter (mostly because they had gone out of use long before the glow starter was invented), but it was possible to include a choke in the circuit whose sole purpose was to provide a pulse on opening of the starter switch to improve starting. DC fittings were complicated by the need to reverse the polarity of the supply to the tube each time it started; failure to do so vastly shortened the life of the tube. Instant start An instant start ballast does not preheat the electrodes, instead using a relatively high voltage (~600 V) to initiate the discharge arc. It is the most energy-efficient type, but yields the fewest lamp-start cycles, as material is blasted from the surface of the cold electrodes each time the lamp is turned on. Instant-start ballasts are best suited to applications with long duty cycles, where the lamps are not frequently turned on and off. Although these were mostly used in countries with 100–120 V mains supplies (for lamps of 40 W or above), they were briefly popular in other countries because the lamp started without the flicker of switch start systems. This popularity was short-lived because of the short lamp life.
Rapid start A rapid start ballast heats the lamp electrodes continuously, using the same heating power before, during, and after lamp starting, by means of a heating transformer winding. It provides longer lamp life and more start cycles than instant start, but has high ballast losses compared to other types, as the electrodes at each end of the lamp continue to consume heating power while the lamp operates. Again, although popular in the United States and Canada for lamps of 40 W and above, rapid start is sometimes used in other countries, particularly where the flicker of switch start systems is undesirable. Some American electronic fluorescent lamp ballasts labeled "rapid start" are otherwise completely different from the classical American rapid start ballast: they use resonance to start the lamp and heat the cathodes, and do not supply the same heating power at all times regardless of lamp conditions. Dimmable ballast A dimmable ballast is very similar to a rapid start ballast, except that the autotransformer is connected to a dimmer. A quadrac-type light dimmer can be used with a dimming ballast, which maintains the heating current while allowing the lamp current to be controlled. A resistor of about 10 kΩ must be connected in parallel with the fluorescent tube to allow reliable firing of the quadrac at low light levels. There are also dimmable electronic ballasts that use 1–10 V or DALI interfaces to dim the lamp. Emergency An electronic ballast with an integrated rechargeable battery is designed to provide emergency egress lighting in the event of a power failure. It can be incorporated into an existing fluorescent light fixture or mounted remotely outside of it. When power is lost, the ballast illuminates one or more lamps in the fixture at a reduced output for a minimum of 90 minutes (as required by code). These can be used as an alternative to egress lighting powered by a back-up electrical generator. Hybrid A hybrid ballast has a magnetic core-and-coil transformer and an electronic switch for the electrode-heating circuit. Like a magnetic ballast, a hybrid unit operates at line power frequency—50 Hz in Europe, for example. These types of ballasts, which are also referred to as cathode-disconnect ballasts, disconnect the electrode-heating circuit after they start the lamps. ANSI ballast factor For a lighting ballast, the ANSI ballast factor is used in North America to compare the light output (in lumens) of a lamp operated on a given ballast with the output of the same lamp operated on an ANSI reference ballast, which runs the lamp at its ANSI-specified nominal power rating. The ballast factor of practical ballasts must be considered in lighting design; a low ballast factor may save energy, but it will produce less light and may shorten lamp life. With fluorescent lamps, the ballast factor can vary from the reference value of 1.0. Ballast triode Early tube-based color TV sets used a ballast triode, such as the PD500, as a parallel shunt stabilizer for the cathode-ray tube (CRT) acceleration voltage, to keep the CRT's deflection factor constant. See also Iron-hydrogen resistor Sodium lamp References External links Gas discharge lamps Analog circuits Resistive components Electrical power control Electric power systems components
Electrical ballast
Physics,Engineering
3,516
46,432,477
https://en.wikipedia.org/wiki/Women%20in%20chemistry
This is a list of women chemists. It should include those who have been important to the development or practice of chemistry, and whose research or application has made significant contributions in the area of basic or applied chemistry. Nobel Laureates 2022 – Carolyn R. Bertozzi – for bioorthogonal chemistry 2020 – Emmanuelle Charpentier and Jennifer Doudna – for CRISPR gene editing 2018 – Frances Arnold – directed evolution to engineer enzymes 2009 – Ada E. Yonath – structure and function of the ribosome 1964 – Dorothy Crowfoot Hodgkin – protein crystallography 1935 – Irène Joliot-Curie – artificial radioactivity 1911 – Marie Skłodowska-Curie – discovery of radium and polonium Eight women have won the Nobel Prize in Chemistry (listed above), awarded annually since 1901 by the Royal Swedish Academy of Sciences. Marie Curie was the first woman to receive the prize, in 1911; it was her second Nobel Prize (she had also won the prize in physics in 1903, along with Pierre Curie and Henri Becquerel), making her the only woman to be awarded two Nobel Prizes. Her prize in chemistry was for her "discovery of the elements radium and polonium, by the isolation of radium and the study of the nature and compounds of this remarkable element." Irène Joliot-Curie, Marie's daughter, became the second woman to be awarded the prize, in 1935, for her discovery of artificial radioactivity. Dorothy Hodgkin won the prize in 1964 for the development of protein crystallography; among her significant discoveries are the structures of penicillin and vitamin B12. Forty-five years later, Ada Yonath shared the prize with Venkatraman Ramakrishnan and Thomas A. Steitz for the study of the structure and function of the ribosome. Emmanuelle Charpentier and Jennifer A. Doudna won the 2020 prize in chemistry "for the development of a method for genome editing." Charpentier and Doudna are the first women to share the Nobel Prize in Chemistry. Wolf Laureates Three women have been awarded the Wolf Prize in Chemistry: 2006 – Ada Yonath, "for ingenious structural discoveries of the ribosomal machinery of peptide-bond formation and the light-driven primary processes in photosynthesis" 2022 – Bonnie L. Bassler and Carolyn R. Bertozzi, "for their seminal contributions to understanding the chemistry of cellular communication and inventing chemical methodologies to study the role of carbohydrates, lipids, and proteins in such biological processes" Chemical elements In the periodic table of elements, two chemical elements are named in honor of female scientists: Curium (element 96), named after Marie and Pierre Curie Meitnerium (element 109), named after Lise Meitner List of women chemists The following list is split by the century in which the majority of each scientist's work was performed; a scientist may have been born, or have worked, outside the century under which she is listed.
19th century Mary Watson (1856–1933), one of the first two female chemistry students at the University of Oxford Margaret Seward (1864–1929), one of the first two female chemistry students at the University of Oxford; signed the 1904 petition to the Chemical Society Vera Bogdanovskaia (1868–1897), one of the first female Russian chemists Gerty Cori (1896–1957), Jewish Czech-American biochemist who was the first American woman to win a Nobel Prize in science Margot Dorenfeldt (1895–1986), first woman to graduate from the Norwegian Institute of Technology (1919) Ida Freund (1863–1914), first woman to be a university chemistry lecturer in the United Kingdom Ellen Gleditsch (1879–1968), Norwegian radiochemist; Norway's second female professor Louise Hammarström (1849–1917), Swedish mineral chemist, first formally educated female Swedish chemist Edith Humphrey (1875–1978), inorganic chemist, probably the first British woman to gain a doctorate in chemistry Julia Lermontova (1846–1919), Russian chemist, first Russian female doctorate in chemistry Laura Linton (1853–1915), American chemist, teacher, and physician Rachel Lloyd (1839–1900), first American female to earn a doctorate in chemistry, first regularly admitted female member of the American Chemical Society, studied sugar beets Muriel Wheldale Onslow (1880–1932), British biochemist Marie Pasteur (1826–1910), French chemist and bacteriologist Mary Engle Pennington (1872–1952), American chemist Agnes Pockels (1862–1935), German chemist Anna Sundström (1785–1871), Swedish chemist Clara Immerwahr (1870–1915), first woman to earn a doctorate in chemistry in Germany Ellen Swallow Richards (1842–1911), American industrial and environmental chemist Anna Volkova (1800–1876), Russian chemist Nadezhda Olimpievna Ziber-Shumova (died 1914), Russian chemist Fanny Rysan Mulford Hitchcock (1851–1936), one of thirteen American women to graduate with a degree in chemistry in the 1800s, and the first to earn a doctorate of philosophy in chemistry; her areas of focus were entomology, fish osteology, and plant pathology. 20th century Elly Agallidis (1914–2006), Greek physical chemist Nancy Allbritton, American analytical chemist and biochemist Valerie Ashby, American chemist Barbara Askins (born 1939), American chemist Kim K. Baldridge, American computational chemist Alice Ball (1892–1916), American chemist Carolyn Bertozzi (born 1966), American biochemist Cynthia Burrows, American physical organic chemist Asima Chatterjee (1917–2006), Indian organic chemist Astrid Cleve (1875–1968), Swedish chemist Mildred Cohn (1913–2009), American chemist Janine Cossy (born 1950), French organic chemist Maria Skłodowska-Curie (1867–1934), Polish-French physicist and chemist (discoverer of polonium and radium, pioneer in radiology); Nobel laureate in physics 1903 and in chemistry 1911 Jillian Lee Dempsey (born 1983), American chemist Vy M. Dong, American organic chemist Abigail Doyle (born 1980), American organic chemist Odile Eisenstein (born 1949), French theoretical chemist Gertrude B. Elion (1918–1999), American biochemist (Nobel Prize in Physiology or Medicine 1988 for drug development) Margaret Faul, Irish/American organic chemist Mary Peters Fieser (1909–1997), American organic chemist Marye Anne Fox (1947–2021), American physical organic chemist Rosalind Franklin (1920–1957), British physical chemist and crystallographer Helen Murray Free (1923–2021), American chemist Gunda I.
Georg, German-trained medicinal chemist, professor of medicinal chemistry in the US Ellen Gleditsch (1879–1968), Norwegian radiochemist Paula T. Hammond (born 1963), American chemical engineer, MIT professor Anna J. Harrison (1912–1998), American organic chemist Remziye Hisar (1902–1992), Turkish chemist, first woman chemist of Turkey Darleane C. Hoffman (born 1926), American nuclear chemist Icie Hoobler (1892–1984), American biochemist Dorothy Crowfoot Hodgkin (1910–1994), British crystallographer, Nobel Prize in Chemistry 1964 Donna M. Huryn, American organic chemist Clara Immerwahr (1870–1915), German chemist Allene Rosalind Jeanes (1906–1995), American organic chemist Malika Jeffries-EL, American organic chemist Irène Joliot-Curie (1897–1956), French chemist and nuclear physicist, Nobel Prize in Chemistry 1935 Madeleine M. Joullié (born 1927), Brazilian-born American organic chemist Isabella Karle (1921–2017), American crystallographer Joyce Jacobson Kaufman (1929–2016), American chemist and pharmacologist Judith Klinman (born 1941), American biochemist Teresa Kowalska (1946–2023), Polish chemist, co-founder of Acta Chromatographica Marisa Kozlowski, American organic chemist Stephanie Kwolek (1923–2014), American chemist, inventor of Kevlar Kathleen Lonsdale (1903–1971), British crystallographer Yvonne Connolly Martin (born 1936), American physical biochemist working on cheminformatics and computer-aided drug design in the US Marie Maynard Daly (1921–2003), first African American woman to earn a PhD in chemistry in the United States Cynthia A. Maryanoff (born 1949), American organic/medicinal chemist Maud Menten (1879–1960), Canadian biochemist Helen Vaughn Michel (born 1932), American nuclear chemist Alexandra Navrotsky (born 1943), American geochemist Dorothy Virginia Nightingale (1902–2000), American organic chemist Yolanda Ortiz (1924–2019), Argentine chemist and environmentalist Kathlyn Parker, American organic chemist Emma Parmee, British-born medicinal/organic chemist Mary Engle Pennington (1872–1952), American food chemist Eva Philbin (1914–2005), Irish chemist Iphigenia Photaki (1921–1983), Greek organic chemist Darshan Ranganathan (1941–2001), Indian organic chemist Mildred Rebstock (1919–2011), American pharmaceutical chemist Sibyl Martha Rock (1909–1981), American pioneer in mass spectrometry and computing Elizabeth Rona (1890–1981), Hungarian (naturalized American) nuclear chemist and polonium expert Mary Swartz Rose (1874–1941), nutrition chemist Melanie Sanford (born 1975), American organic chemist Maxine L. Savitz, American chemist Patsy Sherman (1930–2008), American chemist, co-inventor of Scotchgard Odette L. Shotwell (1922–1998), organic chemist Jean'ne Shreeve (born 1933), American organic chemist Dorothy Martin Simon (1919–2016), American physical chemist Susan Solomon (born 1956), atmospheric chemist JoAnne Stubbe (born 1946), American biochemist Ida Noddack Tacke (1896–1978), German chemist and physicist Tsippy Tamiri (1952–2017), Israeli chemist Giuliana Tesoro (1921–2002), polymer chemist Margaret Thatcher (1925–2013), British chemist and Prime Minister Jean Thomas, British biochemist (chromatin) Martha J. B. Thomas (1926–2006), analytical chemist and chemical engineer Ann E. Weber, American organic/medicinal chemist Karen Wetterhahn (1948–1997), American metal toxicologist Ruth R. Wexler (born 1955), American organic and medicinal chemist, discoverer of two marketed drugs M.
Christina White (born 1970), American organometallic chemist Charlotte Williams, English inorganic chemist Angela K. Wilson, American computational, theoretical, and physical chemist Rosalyn Sussman Yalow (1921–2011), American biochemist Jenara Vicenta Arnal Yarza (1902–1960), Spanish chemist Jean Youatt (born 1925), Australian chemist, biochemist, and microbiologist Ada Yonath (born 1939), Israeli crystallographer, Nobel Prize in Chemistry 2009 Glaci Zancan (1935–2007), Brazilian biochemist, president of the Brazilian Society for the Progress of Science (SBPC) from 1999 to 2003 21st century Heather C. Allen, American chemist whose research focuses on air–liquid interfaces Rommie Amaro, American chemist focusing on the development of computational methods in biophysics for applications to drug discovery. Emily Balskus, American organic and biological chemist, and microbiologist. Recipient of the 2020 Alan T. Waterman Award for her work on understanding the chemistry of metabolic processes. Professor at Harvard University. Natalie Banerji, Swiss chemist and Professor of Chemistry at the University of Bern who studies organic and hybrid materials using ultrafast spectroscopies. Jane P. Chang, chemical engineer, materials scientist and professor at UCLA, known for her research developing advanced atomic layer deposition (ALD) and etching techniques with applications in microelectronics and energy storage devices. Sherry Chemler, American organic chemist. Professor at the University at Buffalo. ACS Cope Scholar Award recipient (2017). Paulette Clancy, British chemist focusing on computational and machine learning methods, particularly chemistry-informed Bayesian optimization, to model the behavior of semiconductor materials. Sheila Hobbs DeWitt, American chemist. Chair, president, CEO, and cofounder of DeuteRx, which has developed PXL065, a deuterated drug. ACS Kathryn C. Hach Award for Entrepreneurial Success (2025). She is a pioneer of combinatorial chemistry. Elena Galoppini, Italian chemist and professor at Rutgers University–Newark whose research focuses on the development of redox- and photo-active molecules to modify surfaces. Clare Grey, British chemist pioneering the use of nuclear magnetic resonance spectroscopy to study battery technology. Awarded the Körber European Science Prize in 2021. Professor at the University of Cambridge. Paula T. Hammond, American chemical engineer focusing on macromolecular design and synthesis of materials for drug delivery systems, particularly in relation to cancer, immunology, and immunotherapy. Professor at MIT. Jeanne Hardy, American biophysicist and chemical biologist, known for her work in the design of allosteric binding sites and control elements in human proteases. Professor at the University of Massachusetts. Geraldine Harriman, American organic chemist. Developed firsocostat. Chief scientific officer and co-founder of HotSpot. Rachel Haurwitz, American biochemist and structural biologist. Her work concerns CRISPR-based technologies; she is a cofounder of Caribou Biosciences, a genome editing and cell therapy development company. Kim Eunkyoung, South Korean materials chemist known for her work in electrochromic (EC) materials design Katja Loos, German polymer chemist working on the design, synthesis, and characterisation of novel and sustainable polymeric materials and macromolecules. Chair of the board of the Zernike Institute for Advanced Materials. Professor at the University of Groningen.
Rachel Mamlok-Naaman, Israeli chemist specializing in chemistry education Lisa Marcaurelle, American synthetic chemist in industry Corine Mathonière, French materials chemist studying molecular magnetism, spin crossover molecules, and coordination chemistry Catherine J. Murphy, American chemist Nga Lee (Sally) Ng, atmospheric chemist studying particulates and their effects on air quality, climate, and human health Sarah O'Connor, American plant synthetic biologist working in England Kimberly Prather, American atmospheric chemist whose research contributed to understanding of atmospheric aerosols and their impact on air quality, climate, and human health Gillian Reid, British inorganic chemist; president-elect (2020–2022) and president (2022–present) of the Royal Society of Chemistry; professor at the University of Southampton Sarah E. Reisman, American organic chemist Magdalena Titirici, materials chemist focusing on sustainable materials for energy applications; professor at Imperial College London Claudia Turro, American inorganic chemist who studies light-initiated reactions of metal complexes with application to disease treatment and solar energy conversion Seble Wagaw, American process chemist and pharmaceutical executive Marcey Lynn Waters, American chemical biologist and supramolecular chemist Jenny Y. Yang, American chemist and clean energy researcher at UCI Wendy Young, American medicinal chemist and pharmaceutical executive; chair of the ACS Medicinal Chemistry Division (2017) Jaqueline Kiplinger, American chemist working at the Los Alamos National Laboratory See also List of female mass spectrometrists 1904 petition to the Chemical Society References Chemists Women chemists Lists of chemists
Women in chemistry
Chemistry
3,274
40,392,854
https://en.wikipedia.org/wiki/Diathermal%20wall
In thermodynamics, a diathermal wall between two thermodynamic systems allows heat transfer but does not allow the transfer of matter across it. The diathermal wall is important because, in thermodynamics, it is customary to assume a priori, for a closed system, the physical existence of transfer of energy across a wall that is impermeable to matter but is not adiabatic; such transfer is called transfer of energy as heat, though it is not customary to label this assumption separately as an axiom or numbered law. Definitions of transfer of heat In theoretical thermodynamics, respected authors vary in their approaches to the definition of the quantity of heat transferred. There are two main streams of thinking. One, here referred to as the thermodynamic stream, proceeds from a primarily empirical viewpoint and defines heat transfer as occurring only by specified macroscopic mechanisms; loosely speaking, this approach is historically older. The other, here referred to as the mechanical stream, proceeds from a primarily theoretical viewpoint and defines heat as a residual quantity, calculated after transfers of energy as macroscopic work between two bodies or closed systems have been determined for a process, so as to conform with the principle of conservation of energy or the first law of thermodynamics for closed systems; this approach grew in the twentieth century, though it was partly manifest in the nineteenth. Thermodynamic stream of thinking In the thermodynamic stream of thinking, the specified mechanisms of heat transfer are conduction and radiation. These mechanisms presuppose recognition of temperature; empirical temperature is enough for this purpose, though absolute temperature can also serve. In this stream of thinking, quantity of heat is defined primarily through calorimetry. Though its definition of them differs from that of the mechanical stream of thinking, the empirical stream nevertheless presupposes the existence of adiabatic enclosures. It defines them through the concepts of heat and temperature, two concepts that are coordinately coherent in the sense that they arise jointly in the description of experiments of transfer of energy as heat. Mechanical stream of thinking In the mechanical stream of thinking about closed systems, heat transferred is defined as a calculated residual amount of energy transferred after the energy transferred as work has been determined, assuming for the calculation the law of conservation of energy, without reference to the concept of temperature. The main elements of the underlying theory are: the existence of states of thermodynamic equilibrium, determinable by precisely one more variable of state (called the non-deformation variable) than the number of independent work (deformation) variables; that a state of internal thermodynamic equilibrium of a body has a well-defined internal energy, as postulated by the first law of thermodynamics; the universality of the law of conservation of energy; the recognition of work as a form of energy transfer; the universal irreversibility of natural processes; the existence of adiabatic enclosures; and the existence of walls permeable only to heat. Axiomatic presentations of this stream of thinking vary slightly, but they intend to avoid the notions of heat and of temperature in their axioms. It is essential to this stream of thinking that heat is not presupposed as being measurable by calorimetry.
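In symbols, the residual definition can be stated as follows. This is a standard textbook formulation supplied here for concreteness; the article itself does not spell it out. For a closed system taken between two equilibrium states, adiabatic work measurements first fix the internal energy difference, and heat is then defined as the residual in a general (non-adiabatic) process:

\[
\Delta U = U_2 - U_1 = W_{\mathrm{adiabatic}}, \qquad Q \equiv \Delta U - W ,
\]

where \( W \) is the work done on the system in the actual process. With the opposite sign convention, in which \( W \) denotes work done by the system, the definition reads \( Q \equiv \Delta U + W \). Temperature appears nowhere in this definition, in keeping with the mechanical stream's program.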
It is essential to this stream of thinking that, for the specification of the thermodynamic state of a body or closed system, in addition to the variables of state called deformation variables, there be precisely one extra real-number-valued variable of state, called the non-deformation variable, though it should not be axiomatically recognized as an empirical temperature, even though it satisfies the criteria for one. Accounts of the diathermal wall As mentioned above, a diathermal wall may pass energy as heat by thermal conduction, but not matter. A diathermal wall can move, and can thus be a part of a transfer of energy as work. Amongst walls that are impermeable to matter, diathermal and adiabatic walls are contraries. For radiation, some further comments may be useful. In classical thermodynamics, one-way radiation, from one system to another, is not considered. Two-way radiation between two systems is one of the two mechanisms of transfer of energy as heat. It may occur across a vacuum, with the two systems separated from the intervening vacuum by walls that are permeable only to radiation; such an arrangement fits the definition of a diathermal wall. The balance of radiative transfer is transfer of heat. In thermodynamics, it is not necessary that the radiative transfer of heat be of pure black-body radiation, nor of incoherent radiation; of course, black-body radiation is incoherent. Thus laser radiation counts in thermodynamics as a one-way component of two-way radiation that is heat transfer. Also, by the Helmholtz reciprocity principle, the target system radiates into the laser source system, though of course relatively weakly compared with the laser light. According to Planck, an incoherent monochromatic beam of light transfers entropy and has a temperature. For a transfer to qualify as work, it must be reversible in the surroundings, for example in the concept of a reversible work reservoir. Laser light is not reversible in the surroundings and is therefore a component of transfer of energy as heat, not work. In radiative transfer theory, one-way radiation is considered. For investigation of Kirchhoff's law of thermal radiation, the notions of absorptivity and emissivity are necessary, and they rest on the idea of one-way radiation. These things are important for the study of the Einstein coefficients, which relies partly on the notion of thermodynamic equilibrium. For the thermodynamic stream of thinking, the notion of empirical temperature is coordinately presupposed in the notion of heat transfer for the definition of an adiabatic wall. For the mechanical stream of thinking, the exact way in which the walls are defined is important. In the presentation of Carathéodory, it is essential that the definition of the adiabatic wall should in no way depend upon the notions of heat or temperature. This is achieved by careful wording and by reference to transfer of energy only as work. Buchdahl is careful in the same way. Nevertheless, Carathéodory explicitly postulates the existence of walls that are permeable only to heat, that is to say impermeable to work and to matter, but still permeable to energy in some unspecified way; they are called diathermal walls. One might be forgiven for inferring from this that heat is energy in transfer across walls permeable only to heat, and that such walls are admitted to exist, unlabeled, as postulated primitives.
The mechanical stream of thinking thus regards the adiabatic enclosure's property of not allowing the transfer of heat across itself as a deduction from the Carathéodory axioms of thermodynamics, and regards transfer of energy as heat as a residual rather than a primary concept. References Bibliography Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York. Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier, Amsterdam. Born, M. (1921). Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik, Physik. Zeitschr. 22: 218–224. Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig. Buchdahl, H.A. (1957/1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London. A partly reliable translation is to be found in Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, LCCN 73-117081. Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw–Hill, New York. Planck, M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia. Thermodynamics
Diathermal wall
Physics,Chemistry,Mathematics
1,867
58,463,477
https://en.wikipedia.org/wiki/Ohio%20water%20resource%20region
The Ohio water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river or the combined drainage areas of a series of rivers. The Ohio region, which is listed with a 2-digit hydrologic unit code (HUC) of 05, consists of 14 subregions, which are listed with the 4-digit HUCs 0501 through 0514; the nesting of these codes is illustrated in the sketch below. The region comprises the drainage of the Ohio River Basin, excluding the Tennessee River Basin, and includes parts of Illinois, Indiana, Kentucky, Maryland, New York, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia and West Virginia. List of water resource subregions See also List of rivers in the United States Water resource region External links References Lists of drainage basins Drainage basins Watersheds of the United States Regions of the United States
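Since hydrologic unit codes nest by prefix—each 4-digit subregion code begins with the 2-digit code of its parent region—membership checks reduce to simple string-prefix tests. The sketch below illustrates this; the helper names are hypothetical, and the subregion list is just the numeric range stated above.

```python
# Hydrologic unit codes (HUCs) nest by prefix: region "05" contains
# subregions "0501" through "0514", each of which begins with "05".
OHIO_REGION = "05"
OHIO_SUBREGIONS = [f"05{n:02d}" for n in range(1, 15)]  # 0501 ... 0514

def in_region(huc, region=OHIO_REGION):
    """True if a hydrologic unit code falls within the given region."""
    return huc.startswith(region)

print(OHIO_SUBREGIONS[:3])   # ['0501', '0502', '0503']
print(in_region("0514"))     # True  (an Ohio subregion)
print(in_region("0601"))     # False (HUC 06 is the Tennessee region)
```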
Ohio water resource region
Environmental_science
211
78,522,963
https://en.wikipedia.org/wiki/Matrix%20%28app%29
Matrix was an instant messaging (IM) and communications network that was shut down through cooperation between Dutch and French police in December 2024. It was also known by the names Mactrix, Totalsec, X-Quantum and Q-Safe. History Dutch police discovered the network while investigating the murder of Peter R. de Vries in 2021. A mobile phone found in the getaway car turned out to be connected to the Matrix network. A joint task force involving Dutch and French police was formed, and 2.3 million messages in 33 languages were intercepted. Police forces in Italy, Lithuania, Spain and Germany were also involved, and Europol and Eurojust coordinated the investigation. Infrastructure The service operated on around 40 servers throughout Europe and had about 8,000 users. Users paid between $1,350 and $1,700 in cryptocurrency for a Google Pixel handset and a six-month subscription to the service. Seizures and arrests Simultaneous raids and searches took place on 3 December 2024 in four countries, leading to the arrest of five suspects in Spain and France, the shutdown of 40 servers in France and Germany, and the seizure of 970 encrypted phones, €145,000 in cash, €500,000 in cryptocurrency and four vehicles. A 52-year-old Lithuanian man who was arrested is believed to be the primary owner and operator. A 30-year-old Dutch man who used the app was also arrested; he is suspected of smuggling cocaine in 2020. Closure Users of the network were informed of its closure by a splash screen. See also Operation Trojan Shield – a sting operation to intercept messages from criminals via the AN0M app EncroChat – a network infiltrated by law enforcement to investigate organized crime in Europe Ennetcom – a network seized by Dutch authorities, who used it to make arrests Sky Global – a communications network and service provider based in Vancouver, Canada References Anonymity networks Cyberspace Dark web Defunct darknet markets Distributed computing architecture End-to-end encryption File sharing Internet architecture Internet culture Network architecture Law enforcement operations in France Crime in the Netherlands
Matrix (app)
Technology,Engineering
425
50,234,511
https://en.wikipedia.org/wiki/Resistance-nodulation-cell%20division%20superfamily
Resistance-nodulation-division (RND) family transporters are a category of bacterial efflux pumps, found especially in Gram-negative bacteria and located in the cytoplasmic membrane, that actively transport substrates. The RND superfamily includes seven families: the heavy metal efflux (HME) family, the hydrophobe/amphiphile efflux-1 family (of Gram-negative bacteria), the nodulation factor exporter (NFE) family, the SecDF protein-secretion accessory protein family, the hydrophobe/amphiphile efflux-2 family, the eukaryotic sterol homeostasis family, and the hydrophobe/amphiphile efflux-3 family. These RND systems are involved in maintaining homeostasis of the cell, removal of toxic compounds, and export of virulence determinants. They have a broad substrate spectrum and can lead to diminished activity of unrelated drug classes if over-expressed. The first drug-resistant bacterial infections were reported in the 1940s, after the first mass production of antibiotics. Most of the RND superfamily transport systems are made of large polypeptide chains. RND proteins exist primarily in Gram-negative bacteria but can also be found in Gram-positive bacteria, archaea, and eukaryotes. Function The RND protein dictates the substrate specificity of the completed transport system; substrates include metal ions, xenobiotics, and drugs. Transport of hydrophobic and amphiphilic compounds is carried out by the HAE-RND subfamily, while the efflux of heavy metals is performed by the HME-RND subfamily. Mechanism and structure RND proteins are large and can include more than 1000 amino acid residues. They are generally composed of two homologous halves (suggesting they arose as a result of an intragenic tandem duplication event that occurred in the primordial system prior to divergence of the family members), together containing 12 transmembrane helices and two large periplasmic loops. Of the twelve helices, there is a single transmembrane spanner (TMS) at the N-terminus followed by a large extracytoplasmic domain, then six additional TMSs, a second large extracytoplasmic domain, and five final C-terminal TMSs. TM4 governs the specificity for a particular substrate in a given RND protein; TM4 can therefore be an indicator of RND specificity without explicit knowledge of the remainder of the protein. RND pumps are the cytoplasmic-membrane portion of a complete tripartite complex, commonly referred to as the CBA efflux system, which spans the inner and outer membranes of Gram-negative bacteria. The RND protein associates with an outer membrane channel and a periplasmic adaptor protein, and the association of all three proteins allows the system to export substrates into the external medium, providing a huge advantage for the bacteria. The CusA protein, an HME-RND family transporter, has been crystallized, providing valuable structural information on HME-RND pumps. CusA exists as a homotrimer, with each unit consisting of 12 transmembrane helices (TM1–TM12). The periplasmic domain is formed by two large loops, between TM1 and TM2 and between TM7 and TM8, and is made up of six subdomains, PN1, PN2, PC1, PC2, DN, and DC, which form a central pore and a dock domain. The central pore is formed by PN1, PN2, PC1, and PC2, which together stabilize the trimeric organization of the homotrimer. Metal ion efflux (HME-RND) The HME-RND family functions as the central protein pump in metal ion efflux, powered by proton–substrate antiport.
The family includes pumps that export monovalent metals (the Cus system) and pumps that export divalent metals (the Czc system). Heavy metal resistance by the RND family was first discovered in R. metallidurans, through the CzcA and later the CnrA protein. The best characterized RND proteins include CzcCBA (Cd2+, Zn2+, and Co2+ resistance), CnrCBA (Ni2+ and Co2+ resistance), and NccCBA (Ni2+, Co2+ and Cd2+ resistance) in Cupriavidus, Czr (Cd2+ and Zn2+ resistance) in Pseudomonas aeruginosa, and Czn (Cd2+, Zn2+, and Ni2+ resistance) in Helicobacter pylori. It has been proposed that metal-ion efflux occurs from both the cytoplasm and the periplasm, based on the location of multiple substrate binding sites on the RND protein. CznCBA The Czn system maintains homeostasis of, and resistance to, cadmium, zinc, and nickel; it is involved in urease modulation and in gastric colonization by H. pylori. The CznC and CznA proteins play the dominant role in nickel homeostasis. CzcCBA Czc confers resistance to cobalt, zinc, and cadmium. The czc operon encodes CzcA (the RND-family protein), the membrane fusion protein (MFP) CzcB, and the outer membrane factor protein (OMF) CzcC, which together form the active tripartite complex. Expression of the operon is regulated by metal ions. Drug resistance (HAE-RND) The RND family plays an important role in producing intrinsic and elevated multi-drug resistance in Gram-negative bacteria. The export of amphiphilic and hydrophobic substrates is governed by the HAE-RND family. In E. coli, five RND pumps have been specifically identified: AcrAB, AcrAD, AcrEF, MdtEF, and MdtAB. Although it is not clear exactly how the tripartite complex works in bacteria, two mechanisms have been proposed: the adaptor bridging model and the adaptor wrapping model. The involvement of HAE-RNDs in the detoxification and export of organic substrates has led to the recent characterization of specific pumps, owing to their increasing medical relevance. Half of the antibiotic resistance demonstrated in in vivo hospital strains of Pseudomonas aeruginosa has been attributed to RND efflux proteins. P. aeruginosa contains 13 RND transport systems, of which one is an HME-RND and the remainder are HAE-RNDs. Among the best characterized are the Mex proteins MexB, MexD, and MexF, which detoxify organic substances. It is proposed that the MexB system demonstrates substrate specificity for beta-lactams, while the MexD system is specific for cephem compounds. E. coli – AcrB In E. coli, multi-drug resistance develops from a variety of mechanisms. Particularly concerning is the ability of efflux mechanisms to confer broad-spectrum resistance; RND efflux pumps provide extrusion of a wide range of compounds. Five HAE-RND transporters have been classified in E. coli; the best studied system couples the multi-drug efflux protein AcrB with the outer membrane protein TolC and the periplasmic adaptor protein AcrA, and the TolC and AcrA proteins are also utilized in the tripartite complexes of other identified RND efflux systems. The AcrAB-TolC efflux system is responsible for the efflux of antimicrobial drugs such as penicillin G, cloxacillin, nafcillin, macrolides, novobiocin, linezolid, and fusidic acid. Other substrates include dyes, detergents, some organic solvents, and steroid hormones. The way in which the lipophilic domains of the substrate interact with the RND pumps is not completely defined. The crystallized AcrB protein provides insight into the mechanism of action of HAE-RND proteins and other RND family proteins.
Multidrug transport (Mdt) efflux Mdt(A) is an efflux pump that confers resistance to a variety of drugs. It is expressed in L. lactis, E. coli and various other bacteria. Unlike other RND proteins, Mdt(A) contains a putative ATP-binding site and two conserved C-motifs in its fifth TMS. Mdt is effective at providing the bacteria with resistance to tetracycline, chloramphenicol, lincosamides and streptomycin. The source of energy for active efflux by Mdt(A) is currently unknown. References Protein superfamilies Bacterial proteins Integral membrane proteins Antimicrobial resistance
Resistance-nodulation-cell division superfamily
Biology
1,888
761,055
https://en.wikipedia.org/wiki/Euler%27s%20Disk
Euler's Disk, invented between 1987 and 1990 by Joseph Bendik, is a trademarked scientific educational toy. It is used to illustrate and study the dynamic system of a spinning and rolling disk on a flat or curved surface. It has been the subject of several scientific papers. Bendik named the toy after the mathematician Leonhard Euler. Discovery Joseph Bendik first noted the interesting motion of the spinning disk while working at Hughes Aircraft (Carlsbad Research Center), after spinning a heavy polishing chuck on his desk at lunch one day. The apparatus is a dramatic visualization of energy exchanges in three different, tightly coupled processes. As the disk gradually decreases its azimuthal rotation, there is also a decrease in amplitude and an increase in the frequency of the disk's axial precession. The evolution of the disk's axial precession is easily visualized in a slow-motion video by looking at the side of the disk and following a single point marked on the disk. The evolution of the rotation of the disk is easily visualized in slow motion by looking at the top of the disk and following an arrow drawn on the disk representing its radius. As the disk releases the initial energy given by the user and approaches a halt, its rotation about the vertical axis slows, while the frequency of its contact-point oscillation increases. Lit from above, with its contact point and nearby lower edge in shadow, the disk appears to levitate before halting. The commercial toy consists of a heavy, thick chrome-plated steel disk and a rigid, slightly concave, mirrored base. Included holographic magnetic stickers can be attached to the disk to enhance the visual effect of wobbling, although these attachments may make it harder to see and understand the processes at work. When spun on a flat surface, the disk exhibits a spinning/rolling motion, slowly progressing through varying rates and types of motion before coming to rest. Most notably, the precession rate of the disk's axis of symmetry increases as the disk spins down. The mirror base provides a low-friction surface; its slight concavity keeps the disk from "wandering" off the surface. Any disk, spun on a reasonably flat surface (such as a coin spun on a table), will exhibit essentially the same type of motion as an Euler's Disk, but for a much shorter time. Commercial disks provide a more effective demonstration of the phenomenon, having an optimized aspect ratio and a precision-polished, slightly rounded edge to maximize the spinning/rolling time. Physics A spinning/rolling disk ultimately comes to rest quite abruptly, the final stage of motion being accompanied by a whirring sound of rapidly increasing frequency. As the disk rolls, the point of rolling contact describes a circle that oscillates with a constant angular velocity $\omega$. If the motion were non-dissipative (frictionless), $\omega$ would be constant and the motion would persist forever; this is contrary to observation, since $\omega$ is not constant in real-life situations. In fact, the precession rate of the axis of symmetry approaches a finite-time singularity modeled by a power law with exponent approximately −1/3 (depending on specific conditions). There are two conspicuous dissipative effects: rolling friction where the disk slips along the surface, and air drag from the resistance of air. Experiments show that rolling friction is mainly responsible for the dissipation and behavior—experiments in a vacuum show that the absence of air affects behavior only slightly, while the behavior (precession rate) depends systematically on the coefficient of friction.
In the limit of small angle (i.e., immediately before the disk stops spinning), air drag (specifically, viscous dissipation) is the dominant factor, but prior to this end stage, rolling friction is the dominant effect. Steady motion with the disk center at rest The behavior of a spinning disk whose center is at rest can be described as follows. Let the line from the center of the disk to the point of contact with the plane be called axis $\hat{r}$. Since the center of the disk and the point of contact are instantaneously at rest (assuming there is no slipping), axis $\hat{r}$ is the instantaneous axis of rotation. The angular momentum is $\mathbf{L} = kMa^2\omega_r\hat{r}$, which holds for any thin, circularly symmetric disk with mass $M$; $k = 1/2$ for a disk with mass concentrated at the rim, $k = 1/4$ for a uniform disk (like Euler's Disk); $a$ is the radius of the disk, and $\omega_r$ is the angular velocity along $\hat{r}$. The contact force is $Mg\hat{z}$, where $g$ is the gravitational acceleration and $\hat{z}$ is the vertical axis pointing upwards. The torque about the center of mass is $\mathbf{N} = Mga\,(\hat{r}\times\hat{z}) = \mathrm{d}\mathbf{L}/\mathrm{d}t$, which we can rewrite as $\mathrm{d}\mathbf{L}/\mathrm{d}t = \boldsymbol{\Omega}\times\mathbf{L}$, where $\boldsymbol{\Omega} = -\frac{g}{ka\omega_r}\hat{z}$. We can conclude that both the angular momentum $\mathbf{L}$ and the disk are precessing about the vertical axis at the rate $\Omega = -g/(ka\omega_r)$ (1). At the same time, $\Omega$ is the angular velocity of the point of contact with the plane. With axis $\hat{z}'$ defined to lie along the symmetry axis of the disk and pointing downwards, it holds that $\hat{z}\cdot\hat{z}' = -\cos\alpha$, where $\alpha$ is the inclination angle of the disk with respect to the horizontal plane. The angular velocity can be thought of as composed of two parts, $\omega_r\hat{r} = \Omega\hat{z} + \omega_{\mathrm{rel}}\hat{z}'$, where $\omega_{\mathrm{rel}}$ is the angular velocity of the disk along its symmetry axis. From the geometry it is easily concluded that $\omega_r = -\Omega\sin\alpha$. Plugging this into equation (1) finally gives $\Omega = \sqrt{g/(ka\sin\alpha)}$ (2). As $\alpha$ adiabatically approaches zero, the angular velocity of the point of contact, $\Omega$, becomes very large, producing the high-frequency sound associated with the spinning disk. However, the rotation of the figure on the face of the coin, whose angular velocity is $\Omega(1-\cos\alpha)$, approaches zero. The total angular velocity $|\omega_r| = \Omega\sin\alpha$ also vanishes, as does the total energy $E = \tfrac{3}{2}Mga\sin\alpha$ (potential plus kinetic, measured from the table surface), as $\alpha$ approaches zero, using equation (2). As $\alpha$ approaches zero, the disk finally loses contact with the table and then quickly settles onto the horizontal surface. One hears sound at the contact frequency $\Omega/2\pi$, which becomes dramatically higher, diverging as $\alpha^{-1/2}$, while the figure rotation rate $\Omega(1-\cos\alpha)$ slows to zero, until the sound abruptly ceases. Levitation illusion As a circularly symmetric disk settles, the separation between a fixed point on the supporting surface and the moving disk above it oscillates at increasing frequency, in sync with the angle of the rotation axis off vertical. The levitation illusion results because the disk edge reflects light when tilted slightly up above the supporting surface, and is in shadow when tilted slightly down in contact. The shadow is not perceived, and the rapidly flashing reflections from the edge above the supporting surface are perceived as steady elevation. See persistence of vision. The levitation illusion can be enhanced by optimizing the curve of the lower edge so the shadow line remains high as the disk settles. A mirror can further enhance the effect by hiding the support surface and showing separation between the moving disk surface and its mirror image. Disk imperfections, seen in shadow, that could hamper the illusion can be hidden in a skin pattern that blurs under motion.
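To get a feel for the magnitudes involved, the short sketch below evaluates equation (2) and the figure rotation rate for a uniform disk. The 37.5 mm radius is an illustrative figure for a toy-sized disk, not a manufacturer specification.

```python
import math

def precession_rate(alpha, a, k=0.25, g=9.81):
    """Steady-rolling precession rate (rad/s) from equation (2):
    Omega = sqrt(g / (k * a * sin(alpha))); k = 1/4 for a uniform disk."""
    return math.sqrt(g / (k * a * math.sin(alpha)))

a = 0.0375  # disk radius in metres (illustrative toy-sized disk)
for deg in (10.0, 1.0, 0.1):
    alpha = math.radians(deg)
    omega = precession_rate(alpha, a)
    wobble_hz = omega / (2 * math.pi)                          # what you hear
    figure_hz = omega * (1 - math.cos(alpha)) / (2 * math.pi)  # what you see
    print(f"alpha = {deg:5.1f} deg: wobble ~ {wobble_hz:6.1f} Hz, "
          f"figure rotation ~ {figure_hz:7.4f} Hz")
```

As the inclination drops from 10° to 0.1°, the wobble heard rises from roughly 12 Hz to over 100 Hz, while the visible rotation of the face slows to a near standstill; this combination is responsible for both the rising whirr and the levitation illusion described above.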
Lit by a point source directly over the center of the soon to settle quarter, side ridges are illuminated when the rotation axis is away from the viewer, and in shadow when the rotation axis is toward the viewer. Vibration blurs the ridges and heads or tails is too foreshortened to show rotation. History of research Moffatt In the early 2000s, research was sparked by an article in the April 20, 2000 edition of Nature, where Keith Moffatt showed that viscous dissipation in the thin layer of air between the disk and the table would be sufficient to account for the observed abruptness of the settling process. He also showed that the motion concluded in a finite-time singularity. His first theoretical hypothesis was contradicted by subsequent research, which showed that rolling friction is actually the dominant factor. Moffatt showed that, as time approaches a particular time (which is mathematically a constant of integration), the viscous dissipation approaches infinity. The singularity that this implies is not realized in practice, because the magnitude of the vertical acceleration cannot exceed the acceleration due to gravity (the disk loses contact with its support surface). Moffatt goes on to show that the theory breaks down at a time before the final settling time , given by where is the radius of the disk, is the acceleration due to Earth's gravity, the dynamic viscosity of air, and the mass of the disk. For the commercially available Euler's Disk toy (see link in "External links" below), is about seconds, at which time the angle between the coin and the surface, , is approximately 0.005 radians and the rolling angular velocity, , is about 500 Hz. Using the above notation, the total spinning/rolling time is: where is the initial inclination of the disk, measured in radians. Moffatt also showed that, if , the finite-time singularity in is given by Experimental results Moffatt's theoretical work inspired several other scientists to experimentally investigate the dissipative mechanism of a spinning/rolling disk, with results that partially contradicted his explanation. These experiments used spinning objects and surfaces of various geometries (disks and rings), with varying coefficients of friction, both in air and in a vacuum, and used instrumentation such as high speed photography to quantify the phenomenon. In the 30 November 2000 issue of Nature, physicists Van den Engh, Nelson and Roach discuss experiments in which disks were spun in a vacuum. Van den Engh used a rijksdaalder, a Dutch coin, whose magnetic properties allowed it to be spun at a precisely determined rate. They found that slippage between the disk and the surface could account for observations, and the presence or absence of air only slightly affected the disk's behavior. They pointed out that Moffatt's theoretical analysis would predict a very long spin time for a disk in a vacuum, which was not observed. Moffatt responded with a generalized theory that should allow experimental determination of which dissipation mechanism is dominant, and pointed out that the dominant dissipation mechanism would always be viscous dissipation in the limit of small (i.e., just before the disk settles). Later work at the University of Guelph by Petrie, Hunt and Gray showed that carrying out the experiments in a vacuum (pressure 0.1 pascal) did not significantly affect the energy dissipation rate. Petrie et al. 
also showed that the rates were largely unaffected by replacing the disk with a ring shape, and that the no-slip condition was satisfied for angles greater than 10°. Another work by Caps, Dorbolo, Ponte, Croisier, and Vandewalle has concluded that the air is a minor source of energy dissipation. The major energy dissipation process is the rolling and slipping of the disk on the supporting surface. It was experimentally shown that the inclination angle, the precession rate, and the angular velocity follow the power law behavior. On several occasions during the 2007–2008 Writers Guild of America strike, talk show host Conan O'Brien would spin his wedding ring on his desk, trying to spin the ring for as long as possible. The quest to achieve longer and longer spin times led him to invite MIT professor Peter Fisher onto the show to experiment with the problem. Spinning the ring in a vacuum had no identifiable effect, while a Teflon spinning support surface gave a record time of 51 seconds, corroborating the claim that rolling friction is the primary mechanism for kinetic energy dissipation. Various kinds of rolling friction as primary mechanism for energy dissipation have been studied by Leine who confirmed experimentally that the frictional resistance of the movement of the contact point over the rim of the disk is most likely the primary dissipation mechanism on a time-scale of seconds. In popular culture Euler's Disks appear in the 2006 film Snow Cake and in the TV show The Big Bang Theory, season 10, episode 16, which aired February 16, 2017. The sound team for the 2001 film Pearl Harbor used a spinning Euler's Disk as a sound effect for torpedoes. A short clip of the sound team playing with Euler's Disk was played during the Academy Awards presentations. The principles of the Euler Disk were used with specially made rings on a table as a futuristic recording medium in the 1960 movie The Time Machine. See also List of topics named after Leonhard Euler Tippe top – another spinning physics toy that exhibits surprising behavior References External links Eulersdisk.com The physics of a spinning coin (April 20, 2000) PhysicsWeb Experimental and theoretical investigation of the energy dissipation of a rolling disk during its final stage of motion (December 12, 2008) Arch Appl Mech Comment on Moffat’s Disk (March 31, 2002) Detailed mathematical physics analysis of disk motion A YouTube video of an Euler's Disk in action Dynamical systems Educational toys Spinning tops Novelty items
Euler's Disk
Physics,Mathematics
2,636
2,902,847
https://en.wikipedia.org/wiki/66%20Arietis
66 Arietis (abbreviated 66 Ari) is a double star in the northern constellation of Aries. 66 Arietis is the Flamsteed designation. It has an apparent visual magnitude of 6.03, putting it near the limit for naked eye visibility. The magnitude 10.4 companion is located at an angular separation of 0.810 arcseconds from the primary along a position angle of 65°. The distance to this pair, as determined from parallax measurements made during the Hipparcos mission, is approximately . The spectrum of the primary component matches a stellar classification of K0 IV, with the luminosity class of IV indicating this is a subgiant star. It has 6 times the radius of the Sun and shines with 18 times the Sun's energy. This energy is radiated from the outer atmosphere at an effective temperature of 4,864 K, giving it the cool orange-hued glow of a K-type star. References External links HR 1048 CCDM J03284 +2248 Image 66 Arietis Arietis, 66 021467 Double stars 016181 Aries (constellation) K-type subgiants 1048 Durchmusterung objects
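The radius, temperature, and luminosity quoted above can be cross-checked against the Stefan–Boltzmann relation L/L⊙ = (R/R⊙)²(Teff/T⊙)⁴. A short sketch doing this check follows; the solar effective temperature of 5,772 K is an assumed reference value not given in the article.

```python
# Cross-check of the quoted stellar parameters using the Stefan-Boltzmann law:
#   L / L_sun = (R / R_sun)**2 * (T_eff / T_sun)**4
# T_sun = 5772 K is an assumed solar reference value (not stated in the article).

R_ratio = 6.0       # radius in solar radii, from the article
T_eff = 4864.0      # effective temperature in kelvin, from the article
T_sun = 5772.0      # assumed solar effective temperature in kelvin

L_ratio = R_ratio**2 * (T_eff / T_sun)**4
print(f"Estimated luminosity: {L_ratio:.1f} L_sun")   # ~18 L_sun, matching the article
```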
66 Arietis
Astronomy
255
12,505
https://en.wikipedia.org/wiki/Galilean%20moons
The Galilean moons (), or Galilean satellites, are the four largest moons of Jupiter: Io, Europa, Ganymede, and Callisto. They are the most readily visible Solar System objects after Saturn, the dimmest of the classical planets; though their closeness to bright Jupiter makes naked-eye observation very difficult, they are readily seen with common binoculars, even under night sky conditions of high light pollution. The invention of the telescope enabled the discovery of the moons in 1610. Through this, they became the first Solar System objects discovered since humans have started tracking the classical planets, and the first objects to be found to orbit any planet beyond Earth. They are planetary-mass moons and among the largest objects in the Solar System. All four, along with Titan, Triton, and Earth's Moon, are larger than any of the Solar System's dwarf planets. The largest, Ganymede, is the largest moon in the Solar System and surpasses the planet Mercury in size (though not mass). Callisto is only slightly smaller than Mercury in size; the smaller ones, Io and Europa, are about the size of the Moon. The three inner moons — Io, Europa, and Ganymede — are in a 4:2:1 orbital resonance with each other. While the Galilean moons are spherical, all of Jupiter's remaining moons have irregular forms because they are too small for their self-gravitation to pull them into spheres. The Galilean moons are named after Galileo Galilei, who observed them in either December 1609 or January 1610, and recognized them as satellites of Jupiter in March 1610; they remained the only known moons of Jupiter until the discovery of the fifth largest moon of Jupiter Amalthea in 1892. Galileo initially named his discovery the Cosmica Sidera ("Cosimo's stars") or Medicean Stars, but the names that eventually prevailed were chosen by Simon Marius. Marius discovered the moons independently at nearly the same time as Galileo, 8 January 1610, and gave them their present individual names, after mythological characters that Zeus seduced or abducted, which were suggested by Johannes Kepler in his Mundus Jovialis, published in 1614. Their discovery showed the importance of the telescope as a tool for astronomers by proving that there were objects in space that cannot be seen by the naked eye. The discovery of celestial bodies orbiting something other than Earth dealt a serious blow to the then-accepted (among educated Europeans) Ptolemaic world system, a geocentric theory in which everything orbits around Earth. History Discovery As a result of improvements that Galileo Galilei made to the telescope, with a magnifying capability of 20×, he was able to see celestial bodies more distinctly than was previously possible. This allowed Galileo to observe in either December 1609 or January 1610 what came to be known as the Galilean moons. On 7 January 1610, Galileo wrote a letter containing the first mention of Jupiter's moons. At the time, he saw only three of them, and he believed them to be fixed stars near Jupiter. He continued to observe these celestial orbs from 8 January to 2 March 1610. In these observations, he discovered a fourth body, and also observed that the four were not fixed stars, but rather were orbiting Jupiter. Galileo's discovery proved the importance of the telescope as a tool for astronomers by showing that there were objects in space to be discovered that until then had remained unseen by the naked eye. 
More importantly, the discovery of celestial bodies orbiting something other than Earth dealt a blow to the then-accepted Ptolemaic world system, which held that Earth was at the center of the universe and all other celestial bodies revolved around it. Galileo's 13 March 1610, Sidereus Nuncius (Starry Messenger), which announced celestial observations through his telescope, does not explicitly mention Copernican heliocentrism, a theory that placed the Sun at the center of the universe. Nevertheless, Galileo accepted the Copernican theory. A Chinese historian of astronomy, Xi Zezong, has claimed that a "small reddish star" observed near Jupiter in 364 BCE by Chinese astronomer Gan De may have been Ganymede. If true, this might predate Galileo's discovery by around two millennia. The observations of Simon Marius are another noted example of observation, and he later reported observing the moons in 1609. However, because he did not publish these findings until after Galileo, there is a degree of uncertainty around his records. Names In 1605, Galileo had been employed as a mathematics tutor for Cosimo de' Medici. In 1609, Cosimo became Grand Duke Cosimo II of Tuscany. Galileo, seeking patronage from his now-wealthy former student and his powerful family, used the discovery of Jupiter's moons to gain it. On 13 February 1610, Galileo wrote to the Grand Duke's secretary: "God graced me with being able, through such a singular sign, to reveal to my Lord my devotion and the desire I have that his glorious name live as equal among the stars, and since it is up to me, the first discoverer, to name these new planets, I wish, in imitation of the great sages who placed the most excellent heroes of that age among the stars, to inscribe these with the name of the Most Serene Grand Duke." Galileo initially called his discovery the Cosmica Sidera ("Cosimo's stars"), in honour of Cosimo alone. Cosimo's secretary suggested to change the name to Medicea Sidera ("the Medician stars"), honouring all four Medici brothers (Cosimo, Francesco, Carlo, and Lorenzo). The discovery was announced in the Sidereus Nuncius ("Starry Messenger"), published in Venice in March 1610, less than two months after the first observations. On 12 March 1610, Galileo wrote his dedicatory letter to the Duke of Tuscany, and the next day sent a copy to the Grand Duke, hoping to obtain the Grand Duke's support as quickly as possible. On 19 March, he sent the telescope he had used to first view Jupiter's moons to the Grand Duke, along with an official copy of Sidereus Nuncius (The Starry Messenger) that, following the secretary's advice, named the four moons the Medician Stars. In his dedicatory introduction, Galileo wrote: Scarcely have the immortal graces of your soul begun to shine forth on earth than bright stars offer themselves in the heavens which, like tongues, will speak of and celebrate your most excellent virtues for all time. Behold, therefore, four stars reserved for your illustrious name ... which ... make their journeys and orbits with a marvelous speed around the star of Jupiter ... like children of the same family ... Indeed, it appears the Maker of the Stars himself, by clear arguments, admonished me to call these new planets by the illustrious name of Your Highness before all others. Other names put forward include: I. Principharus (for the "prince" of Tuscany), II. Victripharus (after Vittoria della Rovere), III. Cosmipharus (after Cosimo de' Medici) and IV. 
Fernipharus (after Duke Ferdinando de' Medici) – by Giovanni Battista Hodierna, a disciple of Galileo and author of the first ephemerides (Medicaeorum Ephemerides, 1656); Circulatores Jovis, or Jovis Comites – by Johannes Hevelius; Gardes, or Satellites (from the Latin satelles, satellitis, meaning "escorts") – by Jacques Ozanam. The names that eventually prevailed were chosen by Simon Marius, who discovered the moons independently at the same time as Galileo: he named them at the suggestion of Johannes Kepler after lovers of the god Zeus (the Greek equivalent of Jupiter), in his Mundus Jovialis, published in 1614: Jupiter is much blamed by the poets on account of his irregular loves. Three maidens are especially mentioned as having been clandestinely courted by Jupiter with success. Io, daughter of the River Inachus, Callisto of Lycaon, Europa of Agenor. Then there was Ganymede, the handsome son of King Tros, whom Jupiter, having taken the form of an eagle, transported to heaven on his back, as poets fabulously tell... I think, therefore, that I shall not have done amiss if the First is called by me Io, the Second Europa, the Third, on account of its majesty of light, Ganymede, the Fourth Callisto... This fancy, and the particular names given, were suggested to me by Kepler, Imperial Astronomer, when we met at Ratisbon fair in October 1613. So if, as a jest, and in memory of our friendship then begun, I hail him as joint father of these four stars, again I shall not be doing wrong. Galileo steadfastly refused to use Marius' names and invented as a result the numbering scheme that is still used nowadays, in parallel with proper moon names. The numbers run from Jupiter outward, thus I, II, III and IV for Io, Europa, Ganymede, and Callisto respectively. Galileo used this system in his notebooks but never actually published it. The numbered names (Jupiter x) were used until the mid-20th century when other inner moons were discovered, and Marius' names became widely used. Determination of longitude Galileo's discovery had practical applications. Safe navigation required accurately determining a ship's position at sea. While latitude could be measured well enough by local astronomical observations, determining longitude required knowledge of the time of each observation synchronized to the time at a reference longitude. The longitude problem was so important that large prizes were offered for its solution at various times by Spain, Holland, and Britain. Galileo proposed determining longitude based on the timing of the orbits of the Galilean moons. The times of the eclipses of the moons could be precisely calculated in advance and compared with local observations on land or on ship to determine the local time and hence longitude. Galileo applied in 1616 for the Spanish prize of 6,000 gold ducats with a lifetime pension of 2,000 a year, and almost two decades later for the Dutch prize, but by then he was under house arrest for possible heresy. The main problem with the Jovian moon technique was that it was difficult to observe the Galilean moons through a telescope on a moving ship, a problem that Galileo tried to solve with the invention of the celatone. Others suggested improvements, but without success. Land mapping surveys had the same problem determining longitude, though with less severe observational conditions. The method proved practical and was used by Giovanni Domenico Cassini and Jean Picard to re-map France. 
Members Some models predict that there may have been several generations of Galilean satellites in Jupiter's early history. Each generation of moons to have formed would have spiraled into Jupiter and been destroyed, due to tidal interactions with Jupiter's proto-satellite disk, with new moons forming from the remaining debris. By the time the present generation formed, the gas in the proto-satellite disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. Other models suggest that Galilean satellites formed in a proto-satellite disk, in which formation timescales were comparable to or shorter than orbital migration timescales. Io is anhydrous and likely has an interior of rock and metal. Europa is thought to contain 8% ice and water by mass with the remainder rock. These moons are, in increasing order of distance from Jupiter: Io Io (Jupiter I) is the innermost of the four Galilean moons of Jupiter; with a diameter of 3642 kilometers, it is the fourth-largest moon in the Solar System, and is only marginally larger than Earth's moon. It was named after Io, a priestess of Hera who became one of the lovers of Zeus. It was referred to as "Jupiter I", or "The first satellite of Jupiter" until the mid-20th century. With over 400 active volcanos, Io is the most geologically active object in the Solar System. Its surface is dotted with more than 100 mountains, some of which are taller than Earth's Mount Everest. Unlike most satellites in the outer Solar System (which have a thick coating of ice), Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core. Although not proven, data from the Galileo orbiter indicates that Io might have its own magnetic field. Io has an extremely thin atmosphere made up mostly of sulfur dioxide (SO2). If a surface data or collection vessel were to land on Io in the future, it would have to be extremely tough (similar to the tank-like bodies of the Soviet Venera landers) to survive the radiation and magnetic fields that originate from Jupiter. Europa Europa (Jupiter II), the second of the four Galilean moons, is the second closest to Jupiter and the smallest at 3121.6 kilometers in diameter, which is slightly smaller than Earth's Moon. The name comes from a mythical Phoenician noblewoman, Europa, who was courted by Zeus and became the queen of Crete, though the name did not become widely used until the mid-20th century. It has a smooth and bright surface, with a layer of water surrounding the mantle of the planet, thought to be 100 kilometers thick. The smooth surface includes a layer of ice, while the bottom of the ice is theorized to be liquid water. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath it, which could conceivably serve as an abode for extraterrestrial life. Heat energy from tidal flexing ensures that the ocean remains liquid and drives geological activity. Life may exist in Europa's under-ice ocean. So far, there is no evidence that life exists on Europa, but the likely presence of liquid water has spurred calls to send a probe there. The prominent markings that criss-cross the moon seem to be mainly albedo features, which emphasize low topography. There are few craters on Europa because its surface is tectonically active and young. Some theories suggest that Jupiter's gravity is causing these markings, as one side of Europa is constantly facing Jupiter. 
Volcanic water eruptions splitting the surface of Europa and even geysers have also been considered as causes. The reddish-brown color of the markings is theorized to be caused by sulfur, but because no data collection devices have been sent to Europa, scientists cannot yet confirm this. Europa is primarily made of silicate rock and likely has an iron core. It has a tenuous atmosphere composed primarily of oxygen. Ganymede Ganymede (Jupiter III), the third Galilean moon, is named after the mythological Ganymede, cupbearer of the Greek gods and Zeus's beloved. Ganymede is the largest natural satellite in the Solar System at 5262.4 kilometers in diameter, which makes it larger than the planet Mercury – although only at about half of its mass since Ganymede is an icy world. It is the only satellite in the Solar System known to possess a magnetosphere, likely created through convection within the liquid iron core. Ganymede is composed primarily of silicate rock and water ice, and a salt-water ocean is believed to exist nearly 200 km below Ganymede's surface, sandwiched between layers of ice. The metallic core of Ganymede suggests a greater heat at some time in its past than had previously been proposed. The surface is a mix of two types of terrain—highly cratered dark regions and younger, but still ancient, regions with a large array of grooves and ridges. Ganymede has a high number of craters, but many are gone or barely visible due to its icy crust forming over them. The satellite has a thin oxygen atmosphere that includes O, O2, and possibly O3 (ozone), and some atomic hydrogen. Callisto Callisto (Jupiter IV) is the fourth and last Galilean moon, and is the second-largest of the four, and at 4820.6 kilometers in diameter, it is the third largest moon in the Solar System, and barely smaller than Mercury, though only a third of the latter's mass. It is named after the Greek mythological nymph Callisto, a lover of Zeus who was a daughter of the Arkadian King Lykaon and a hunting companion of the goddess Artemis. The moon does not form part of the orbital resonance that affects three inner Galilean satellites and thus does not experience appreciable tidal heating. Callisto is composed of approximately equal amounts of rock and ices, which makes it the least dense of the Galilean moons. It is one of the most heavily cratered satellites in the Solar System, and one major feature is a basin around 3000 km wide called Valhalla. Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen. Investigation revealed that Callisto may possibly have a subsurface ocean of liquid water at depths less than 300 kilometres. The likely presence of an ocean within Callisto indicates that it can or could harbour life. However, this is less likely than on nearby Europa. Callisto has long been considered the most suitable place for a human base for future exploration of the Jupiter system since it is furthest from the intense radiation of Jupiter's magnetic field. Comparative structure Fluctuations in the orbits of the moons indicate that their mean density decreases with distance from Jupiter. Callisto, the outermost and least dense of the four, has a density intermediate between ice and rock whereas Io, the innermost and densest moon, has a density intermediate between rock and iron. 
Callisto has an ancient, heavily cratered and unaltered ice surface and the way it rotates indicates that its density is equally distributed, suggesting that it has no rocky or metallic core but consists of a homogeneous mix of rock and ice. This may well have been the original structure of all the moons. The rotation of the three inner moons, in contrast, indicates differentiation of their interiors with denser matter at the core and lighter matter above. They also reveal significant alteration of the surface. Ganymede reveals past tectonic movement of the ice surface which required partial melting of subsurface layers. Europa reveals more dynamic and recent movement of this nature, suggesting a thinner ice crust. Finally, Io, the innermost moon, has a sulfur surface, active volcanism and no sign of ice. All this evidence suggests that the nearer a moon is to Jupiter the hotter its interior. The current model is that the moons experience tidal heating as a result of the gravitational field of Jupiter in inverse proportion to the square of their distance from the giant planet. In all but Callisto this will have melted the interior ice, allowing rock and iron to sink to the interior and water to cover the surface. In Ganymede a thick and solid ice crust then formed. In warmer Europa a thinner more easily broken crust formed. In Io the heating is so extreme that all the rock has melted and water has long ago boiled out into space. Size Latest flyby Origin and evolution Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history. Simulations suggest that, while the disk had a relatively high mass at any given moment, over time a substantial fraction (several tenths of a percent) of the mass of Jupiter captured from the Solar nebula was processed through it. However, the disk mass of only 2% that of Jupiter is required to explain the existing satellites. Thus there may have been several generations of Galilean-mass satellites in Jupiter's early history. Each generation of moons would have spiraled into Jupiter, due to drag from the disk, with new moons then forming from the new debris captured from the Solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance which still exists for Io, Europa, and Ganymede. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io. Tidal dissipation in the Jovian system is still ongoing and Callisto will likely be captured into the resonance in about 1.5 billion years, creating a 1:2:4:8 chain. Visibility All four Galilean moons are bright enough to be viewed from Earth without a telescope, if only they could appear farther away from Jupiter. (They are, however, easily distinguished with even low-powered binoculars.) They have apparent magnitudes between 4.6 and 5.6 when Jupiter is in opposition with the Sun, and are about one unit of magnitude dimmer when Jupiter is in conjunction. The main difficulty in observing the moons from Earth is their proximity to Jupiter, since they are obscured by its brightness. 
The maximum angular separations of the moons are between 2 and 10 arcminutes from Jupiter, which is close to the limit of human visual acuity. Ganymede and Callisto, at their maximum separation, are the likeliest targets for potential naked-eye observation. Orbit animations GIF animations depicting the Galilean moon orbits and the resonance of Io, Europa, and Ganymede See also Jupiter's moons in fiction Colonization of the Jovian System Notes References External links Sky & Telescope utility for identifying Galilean moons Interactive 3D visualisation of Jupiter and the Galilean moons NASA's Stunning Discoveries on Jupiter's Largest Moons | Our Solar System's Moons A Beginner's Guide to Jupiter's Moons Dominic Ford: The Moons of Jupiter. With a chart of the current position of the Galilean moons. Copernican Revolution Moons of Jupiter Moons with a prograde orbit Solar System
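The 2–10 arcminute elongations quoted above follow from simple small-angle geometry: the maximum separation is roughly the moon's orbital radius divided by the Earth–Jupiter distance. The sketch below reproduces those figures; the orbital radii and the opposition distance of about 4.2 AU are typical published values supplied here for illustration, not taken from this article.

```python
import math

AU_KM = 149_597_870.7                 # kilometres per astronomical unit
jupiter_distance_km = 4.2 * AU_KM     # assumed Earth-Jupiter distance near opposition

# Approximate orbital radii (semi-major axes) of the Galilean moons, in km.
# These are typical published figures, supplied here only for illustration.
orbital_radius_km = {
    "Io": 421_700,
    "Europa": 671_000,
    "Ganymede": 1_070_400,
    "Callisto": 1_882_700,
}

for moon, radius in orbital_radius_km.items():
    sep_rad = math.atan(radius / jupiter_distance_km)
    sep_arcmin = math.degrees(sep_rad) * 60
    print(f"{moon:9s} maximum elongation ~ {sep_arcmin:4.1f} arcmin")
```

The results (roughly 2.3, 3.7, 5.9 and 10.3 arcminutes) are consistent with the range stated above, and show why Ganymede and Callisto are the most plausible naked-eye targets.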
Galilean moons
Astronomy
4,593
1,072,006
https://en.wikipedia.org/wiki/Median%20%28geometry%29
In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length. The concept of a median extends to tetrahedra. Relation to center of mass Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle. Thus, the object would balance at the intersection point of the medians. The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from. Equal-area division Each median divides the area of the triangle in half, hence the name, and hence a triangular object of uniform density would balance on any median. (Any other lines that divide triangle's area into two equal parts do not pass through the centroid.) The three medians divide the triangle into six smaller triangles of equal area. Proof of equal-area property Consider a triangle ABC. Let D be the midpoint of , E be the midpoint of , F be the midpoint of , and O be the centroid (most commonly denoted G). By definition, . Thus and , where represents the area of triangle ; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height. We have: Thus, and Since , therefore, . Using the same method, one can show that . Three congruent triangles In 2014 Lee Sallows discovered the following theorem: The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the two triangles in each such pair are rotated about their common midpoint until they meet so as to share a common side, then the three new triangles formed by the union of each pair are congruent. Formulas involving the medians' lengths The lengths of the medians can be obtained from Apollonius' theorem as: where and are the sides of the triangle with respective medians and from their midpoints. These formulas imply the relationships: Other properties Let ABC be a triangle, let G be its centroid, and let D, E, and F be the midpoints of BC, CA, and AB, respectively. For any point P in the plane of ABC then The centroid divides each median into parts in the ratio 2:1, with the centroid being twice as close to the midpoint of a side as it is to the opposite vertex. For any triangle with sides and medians The medians from sides of lengths and are perpendicular if and only if The medians of a right triangle with hypotenuse satisfy Any triangle's area T can be expressed in terms of its medians , and as follows. If their semi-sum is denoted by then Tetrahedron A tetrahedron is a three-dimensional object having four triangular faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median of the tetrahedron. There are four medians, and they are all concurrent at the centroid of the tetrahedron. As in the two-dimensional case, the centroid of the tetrahedron is the center of mass. However contrary to the two-dimensional case the centroid divides the medians not in a 2:1 ratio but in a 3:1 ratio (Commandino's theorem). 
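The median-length formula referred to above is, in its usual statement of Apollonius' theorem, m_a = ½√(2b² + 2c² − a²) (and cyclically for m_b and m_c), and the implied relationship is that the squared medians sum to 3/4 of the sum of the squared sides. The sketch below verifies both numerically, together with the 2:1 centroid division, for an arbitrarily chosen triangle.

```python
import math

# Arbitrary example triangle (coordinates chosen only for illustration).
A, B, C = (0.0, 0.0), (7.0, 1.0), (3.0, 5.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths opposite A, B, C

# Median lengths via Apollonius' theorem: m_a = 0.5 * sqrt(2b^2 + 2c^2 - a^2), etc.
m_a = 0.5 * math.sqrt(2*b*b + 2*c*c - a*a)
m_b = 0.5 * math.sqrt(2*c*c + 2*a*a - b*b)
m_c = 0.5 * math.sqrt(2*a*a + 2*b*b - c*c)

# Direct check: the median from A joins A to the midpoint of BC.
mid_BC = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
assert math.isclose(m_a, dist(A, mid_BC))

# The squared medians sum to 3/4 of the sum of the squared sides.
assert math.isclose(m_a**2 + m_b**2 + m_c**2, 0.75 * (a*a + b*b + c*c))

# The centroid divides each median in the ratio 2:1 (vertex side twice as long).
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
assert math.isclose(dist(A, G), 2 * dist(G, mid_BC))

print("Apollonius' theorem, the 3/4 identity and the 2:1 centroid ratio all check out.")
```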
See also Angle bisector Altitude (triangle) Automedian triangle References External links The Medians at cut-the-knot Area of Median Triangle at cut-the-knot Medians of a triangle With interactive animation Constructing a median of a triangle with compass and straightedge animated demonstration Straight lines defined for a triangle Articles containing proofs
Median (geometry)
Mathematics
869
16,673,816
https://en.wikipedia.org/wiki/Representation%20ring
In mathematics, especially in the area of algebra known as representation theory, the representation ring (or Green ring after J. A. Green) of a group is a ring formed from all the (isomorphism classes of the) finite-dimensional linear representations of the group. Elements of the representation ring are sometimes called virtual representations. For a given group, the ring will depend on the base field of the representations. The case of complex coefficients is the most developed, but the case of algebraically closed fields of characteristic p where the Sylow p-subgroups are cyclic is also theoretically approachable. Formal definition Given a group G and a field F, the elements of its representation ring RF(G) are the formal differences of isomorphism classes of finite-dimensional F-representations of G. For the ring structure, addition is given by the direct sum of representations, and multiplication by their tensor product over F. When F is omitted from the notation, as in R(G), then F is implicitly taken to be the field of complex numbers. The representation ring of G is the Grothendieck ring of the category of finite-dimensional representations of G. Examples For the complex representations of the cyclic group of order n, the representation ring RC(Cn) is isomorphic to Z[X]/(Xn − 1), where X corresponds to the complex representation sending a generator of the group to a primitive nth root of unity. More generally, the complex representation ring of a finite abelian group may be identified with the group ring of the character group. For the rational representations of the cyclic group of order 3, the representation ring RQ(C3) is isomorphic to Z[X]/(X2 − X − 2), where X corresponds to the irreducible rational representation of dimension 2. For the modular representations of the cyclic group of order 3 over a field F of characteristic 3, the representation ring RF(C3) is isomorphic to Z[X,Y]/(X 2 − Y − 1, XY − 2Y,Y 2 − 3Y). The continuous representation ring R(S1) for the circle group is isomorphic to Z[X, X −1]. The ring of real representations is the subring of R(G) of elements fixed by the involution on R(G) given by X ↦ X −1. The ring RC(S3) for the symmetric group of degree three is isomorphic to Z[X,Y]/(XY − Y,X 2 − 1,Y 2 − X − Y − 1), where X is the alternating representation and Y the irreducible representation of S3. Characters Any representation defines a character χ:G → C. Such a function is constant on conjugacy classes of G, a so-called class function; denote the ring of class functions by C(G). If G is finite, the homomorphism R(G) → C(G) is injective, so that R(G) can be identified with a subring of C(G). For fields F whose characteristic divides the order of the group G, the homomorphism from RF(G) → C(G) defined by Brauer characters is no longer injective. For a compact connected group, R(G) is isomorphic to the subring of R(T) (where T is a maximal torus) consisting of those class functions that are invariant under the action of the Weyl group (Atiyah and Hirzebruch, 1961). For the general compact Lie group, see Segal (1968). λ-ring and Adams operations Given a representation of G and a natural number n, we can form the n-th exterior power of the representation, which is again a representation of G. This induces an operation λn : R(G) → R(G). With these operations, R(G) becomes a λ-ring. 
The Adams operations on the representation ring R(G) are maps Ψ^k characterised by their effect on characters χ: (Ψ^k χ)(g) = χ(g^k). The operations Ψ^k are ring homomorphisms of R(G) to itself, and on representations ρ of dimension d they are given by Ψ^k(ρ) = N_k(Λ^1ρ, Λ^2ρ, ..., Λ^dρ), where the Λ^iρ are the exterior powers of ρ and N_k is the k-th power sum expressed as a function of the d elementary symmetric functions of d variables. References Group theory Ring theory Finite groups Lie groups Representation theory of groups
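As a concrete illustration of the cyclic-group example above, a virtual representation in R(Cn) ≅ Z[X]/(X^n − 1) can be modelled as a vector of n integer coefficients, with the direct sum as addition, the tensor product as cyclic convolution, and the Adams operation Ψ^k acting by X^i ↦ X^(ki mod n). The sketch below is a minimal, ad hoc model; the function names are not a standard API.

```python
# Minimal model of the complex representation ring of the cyclic group C_n,
# R(C_n) ~ Z[X]/(X^n - 1).  A virtual representation is stored as a list of n
# integer coefficients [c_0, ..., c_{n-1}] standing for sum c_i * X^i, where X
# is the one-dimensional representation sending a fixed generator to a
# primitive n-th root of unity.

def add(u, v):
    return [a + b for a, b in zip(u, v)]            # direct sum of representations

def mul(u, v):
    n = len(u)
    out = [0] * n
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[(i + j) % n] += a * b               # tensor product: X^i * X^j = X^(i+j mod n)
    return out

def adams(k, u):
    n = len(u)
    out = [0] * n
    for i, a in enumerate(u):
        out[(k * i) % n] += a                       # Psi^k sends X^i to X^(k*i mod n)
    return out

n = 6
X = [0, 1, 0, 0, 0, 0]                              # the generating character X
regular = [1] * n                                   # regular representation: 1 + X + ... + X^(n-1)

assert mul(X, [0, 0, 0, 0, 0, 1]) == [1, 0, 0, 0, 0, 0]   # X * X^5 = X^6 = 1
assert adams(2, X) == [0, 0, 1, 0, 0, 0]                   # Psi^2(X) = X^2
assert mul(regular, X) == regular                          # tensoring with X permutes the summands
print("R(C_6) sanity checks passed.")
```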
Representation ring
Mathematics
921
41,078
https://en.wikipedia.org/wiki/Duty%20cycle
A duty cycle or power cycle is the fraction of one period in which a signal or system is active. Duty cycle is commonly expressed as a percentage or a ratio. A period is the time it takes for a signal to complete an on-and-off cycle. As a formula, a duty cycle (%) may be expressed as: D(%) = (PW / T) × 100%. Equally, a duty cycle (ratio) may be expressed as: D = PW / T, where D is the duty cycle, PW is the pulse width (pulse active time), and T is the total period of the signal. Thus, a 60% duty cycle means the signal is on 60% of the time but off 40% of the time. The "on time" for a 60% duty cycle could be a fraction of a second, a day, or even a week, depending on the length of the period. Duty cycles can be used to describe the percent time of an active signal in an electrical device such as the power switch in a switching power supply, or the firing of action potentials by a living system such as a neuron. Some publications use a different symbol for the duty cycle. As a ratio, the duty cycle is unitless and may be given as a decimal fraction or a percentage. An alternative term in use is duty factor. Applications Electrical and electronics In electronics, the duty cycle is the ratio of the pulse duration, or pulse width (PW), to the total period (T) of the waveform, expressed as a percentage. It is generally used to represent the time duration of a pulse when it is high (logic 1). In digital electronics, signals take the form of rectangular waveforms represented by logic 1 and logic 0. Logic 1 stands for the presence of an electric pulse and 0 for its absence. For example, the signal (10101010) has a 50% duty cycle, because the pulse remains high for 1/2 of the period and low for 1/2 of the period. Similarly, the pulse (10001000) has a 25% duty cycle, because the pulse remains high for only 1/4 of the period and low for 3/4 of the period. Electrical motors typically use less than a 100% duty cycle. For example, if a motor runs for one out of 100 seconds, or 1/100 of the time, then its duty cycle is 1/100, or 1 percent. Pulse-width modulation (PWM) is used in a variety of electronic situations, such as power delivery and voltage regulation. In electronic music, synthesizers vary the duty cycle of their audio-frequency oscillators to obtain a subtle effect on the tone colors. This technique is known as pulse-width modulation. In the printer/copier industry, the duty cycle specification refers to the rated throughput (that is, printed pages) of a device per month. In a welding power supply, the maximum duty cycle is defined as the percentage of time in a 10-minute period that it can be operated continuously before overheating. Biological systems The concept of duty cycles is also used to describe the activity of neurons and muscle fibers. In neural circuits, for example, a duty cycle specifically refers to the proportion of a cycle period in which a neuron remains active. Generation One way to generate fairly accurate square-wave signals with a 1/n duty factor, where n is an integer, is to vary the duty cycle until the nth harmonic is significantly suppressed. For audio-band signals, this can even be done "by ear"; for example, a −40 dB reduction in the 3rd harmonic corresponds to setting the duty factor to 1/3 with a precision of 1%, and a −60 dB reduction corresponds to a precision of 0.1%. Mark-space ratio Mark-space ratio, or mark-to-space ratio, is another term for the same concept, describing the temporal relationship between the two alternating periods of a waveform. 
However, whereas the duty cycle relates the duration of one period to the duration of the entire cycle, the mark-space ratio relates the durations of the two individual periods to each other: MSR = t_mark / t_space, where t_mark and t_space are the durations of the two alternating (mark and space) periods. References Mechanical engineering Timing in electronic circuits Articles containing video clips
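A minimal sketch of the two quantities discussed above, computed from a pulse width and a period (the function names are illustrative):

```python
def duty_cycle(pulse_width, period):
    """Duty cycle as a fraction of the period: D = PW / T."""
    return pulse_width / period

def mark_space_ratio(pulse_width, period):
    """Ratio of 'on' (mark) time to 'off' (space) time within one period."""
    return pulse_width / (period - pulse_width)

# Example: a signal that is high for 6 ms out of a 10 ms period.
pw, t = 6e-3, 10e-3
print(f"duty cycle       = {duty_cycle(pw, t):.0%}")          # 60%
print(f"mark-space ratio = {mark_space_ratio(pw, t):.2f}")    # 1.50
```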
Duty cycle
Physics,Engineering
846
226,021
https://en.wikipedia.org/wiki/Dynamics%20%28music%29
In music, the dynamics of a piece are the variation in loudness between notes or phrases. Dynamics are indicated by specific musical notation, often in some detail. However, dynamics markings require interpretation by the performer depending on the musical context: a specific marking may correspond to a different volume between pieces or even sections of one piece. The execution of dynamics also extends beyond loudness to include changes in timbre and sometimes tempo rubato. Purpose and interpretation Dynamics are one of the expressive elements of music. Used effectively, dynamics help musicians sustain variety and interest in a musical performance, and communicate a particular emotional state or feeling. Dynamic markings are always relative. (piano - "soft") never indicates a precise level of loudness; it merely indicates that music in a passage so marked should be considerably quieter than (forte - "loud"). There are many factors affecting the interpretation of a dynamic marking. For instance, the middle of a musical phrase will normally be played louder than the beginning or end, to ensure the phrase is properly shaped, even where a passage is marked throughout. Similarly, in multi-part music, some voices will naturally be played louder than others, for instance, to emphasize the melody and the bass line, even if a whole passage is marked at one dynamic level. Some instruments are naturally louder than others – for instance, a tuba playing mezzo-piano will likely be louder than a guitar playing forte, while a high-pitched instrument like the piccolo playing in its upper register can sound loud even when its actual decibel level is lower than that of other instruments. Dynamic markings The two basic dynamic indications in music are: or piano, meaning "soft or quiet". or forte, meaning "loud or strong". More subtle degrees of loudness or softness are indicated by: , standing for mezzo-piano, meaning "moderately quiet". , standing for mezzo-forte, meaning "moderately loud". , standing for più piano and meaning "quieter". , standing for più forte and meaning "louder". Use of up to three consecutive s or s is also common: , standing for pianissimo and meaning "very quiet". , standing for fortissimo and meaning "very loud". ("triple piano"), standing for pianississimo or piano pianissimo and meaning "very very quiet". ("triple forte"), standing for fortississimo or forte fortissimo and meaning "very very loud". There are additional special markings that are not very common: or , standing for sforzando and meaning "suddenly very loud", which only applies to a given beat. or , standing for rinforzando and meaning "reinforced", which refers to a sudden increase in volume that only applies to a given phrase. or , standing for niente and meaning "nothing", which refers to silence; it is generally used in combination with other markings for special effect. Changes Three Italian words are used to show gradual changes in volume: crescendo (abbreviated cresc.) translates as "increasing" (literally "growing") decrescendo (abbreviated to decresc.) translates as "decreasing". diminuendo (abbreviated dim.) translates as "diminishing". Dynamic changes can be indicated by angled symbols. A crescendo symbol consists of two lines that open to the right (); a decrescendo symbol starts open on the left and closes toward the right (). These symbols are sometimes referred to as hairpins or wedges. 
The following notation indicates music starting moderately strong, then becoming gradually stronger and then gradually quieter: Hairpins are typically positioned below the staff (or between the two staves in a grand staff), though they may appear above, especially in vocal music or when a single performer plays multiple melody lines. They denote dynamic changes over a short duration (up to a few bars), whereas cresc., decresc., and dim. signify more gradual changes. Word directions can be extended with dashes to indicate the temporal span of the change, which can extend across multiple pages. The term morendo ("dying") may also denote a gradual reduction in both dynamics and tempo. For pronounced dynamic shifts, cresc. molto and dim. molto are commonly used, with molto meaning "much". Conversely, poco cresc. and poco dim. indicate gentler changes, with "poco" translating to a little, or alternatively poco a poco meaning "little by little". Sudden dynamic changes are often indicated by prefixing or suffixing subito (meaning "suddenly") to the new dynamic notation. Subito piano (abbreviated as or ) ("suddenly soft") implies a quick, almost abrupt reduction in volume to around the range, often employed to subvert listener expectations, signaling a more intimate expression. Likewise, subito can mark sudden increases in volume, as in or ) ("suddenly loud"). Accented notes are generally marked with an accent sign > placed above or below the note, emphasizing the attack relative to the prevailing dynamics. A sharper and briefer emphasis is denoted with a marcato mark ^ above the note. If a specific emphasis is required, variations of forzando/forzato, or fortepiano can be used. forzando/forzato signifies a forceful accent, abbreviated as . To enhance the effect, subito often precedes it as (subito forzato/forzando, sforzando/sforzato). The interpretation and execution of these markings are at the performer's discretion, with forzato/forzando typically seen as a variation of marcato and subito forzando/forzato as a marcato with added tenuto. The fortepiano notation denotes a forte followed immediately by piano. Contrastingly, abbreviates poco forte, translating to "a little loud", but according to Brahms, implies a forte character with a piano sound, although rarely used due to potential confusion with pianoforte. Messa di voce is a singing technique and musical ornament on a single pitch while executing a crescendo and diminuendo. Extreme dynamic markings While the typical range of dynamic markings is from to , some pieces use additional markings of further emphasis. Extreme dynamic markings imply either a very large dynamic range or very small differences of loudness within a normal range. This kind of usage is most common in orchestral works from the late 19th century onward. Generally, these markings are supported by the orchestration of the work, with heavy forte passages brought to life by having many loud instruments like brass and percussion playing at once. In Holst's The Planets, occurs twice in "Mars" and once in "Uranus", often punctuated by organ. In Stravinsky's The Firebird Suite, is marked for the strings and woodwinds at the end of the Finale. Tchaikovsky marks a bassoon solo (6 s) in his Pathétique Symphony and uses in passages of his 1812 Overture and his Fifth Symphony. The baritone passage "Era la notte" from Verdi's opera Otello uses , though the same spot is marked in the full score. Sergei Rachmaninoff uses in his Prelude in C, Op. 3 No. 2. 
Gustav Mahler, in the third movement of his Seventh Symphony, gives the celli and basses a marking of (5 s), along with a footnote directing 'pluck so hard that the strings hit the wood'. On the other extreme, Carl Nielsen, in the second movement of his Fifth Symphony, marked a passage for woodwinds a diminuendo to (5 s).΄ Brian Ferneyhough, in his Lemma-Icon-Epigram, uses (6 s). Giuseppe Verdi, in Scene 5 (Act II from his opera Otello), uses (7 s). György Ligeti uses extreme dynamics in his music: the Cello Concerto begins with a passage marked (8 s), in his Piano Études Étude No. 9 (Vertige) ends with a diminuendo to (8 s), while Étude No. 13 (L'Escalier du Diable) contains a passage marked (6 s) that progresses to a (8 s) and his opera Le Grand Macabre has (10 s) with a stroke of the hammer. History On Music, one of the Moralia attributed to the philosopher Plutarch in the first century AD, suggests that ancient Greek musical performance included dynamic transitions – though dynamics receive far less attention in the text than does rhythm or harmony. The Renaissance composer Giovanni Gabrieli was one of the first to indicate dynamics in music notation. However, much of the use of dynamics in early Baroque music remained implicit and was achieved through a practice called raddoppio ("doubling") and later ripieno ("filling"), which consisted of creating a contrast between a small number of elements and then a larger number of elements (usually in a ratio of 2:1 or more) to increase the mass of sound. This practice was pivotal to the structuring of instrumental forms such as the concerto grosso and the solo concerto, where a few or one instrument, supported by harmonic basso continuo instruments (organ, lute, theorbo, harpsichord, lirone, and low register strings, such as cello or viola da gamba, often used together) variously alternate or join to create greater contrasts. This practice is usually called terraced dynamics, i.e. the alternation of piano and forte. Later baroque musicians, such as Antonio Vivaldi, tended to use more varied dynamics. J.S. Bach used some dynamic terms, including forte, piano, più piano, and pianissimo (although written out as full words), and in some cases it may be that was considered to mean pianissimo in this period. In 1752, Johann Joachim Quantz wrote that "Light and shade must be constantly introduced ... by the incessant interchange of loud and soft." In addition to this, the harpsichord in fact becomes louder or softer depending on the thickness of the musical texture (four notes are louder than two). In the Romantic period, composers greatly expanded the vocabulary for describing dynamic changes in their scores. Where Haydn and Mozart specified six levels ( to ), Beethoven used also and (the latter less frequently), and Brahms used a range of terms to describe the dynamics he wanted. In the slow movement of Brahms's trio for violin, horn and piano (Opus 40), he uses the expressions , molto piano, and quasi niente to express different qualities of quiet. Many Romantic and later composers added and , making for a total of ten levels between and . An example of how effective contrasting dynamics can be may be found in the overture to Smetana’s opera The Bartered Bride. The fast scurrying quavers played pianissimo by the second violins form a sharply differentiated background to the incisive thematic statement played fortissimo by the firsts. 
Interpretation by notation programs In some music notation programs, there are default MIDI key velocity values associated with these indications, but more sophisticated programs allow users to change these as needed. These defaults are listed in the following table for some applications, including Apple's Logic Pro 9 (2009–2013), Avid's Sibelius 5 (2007–2009), musescore.org's MuseScore 3.0 (2019), MakeMusic's Finale 26 (2018-2021), and Musitek's SmartScore X2 Pro (2016) and 64 Pro. (2021). MIDI specifies the range of key velocities as an integer between 0 and 127: The velocity effect on volume depends on the particular instrument. For instance, a grand piano has a much greater volume range than a recorder. Relation to audio dynamics The introduction of modern recording techniques has provided alternative ways to control the dynamics of music. Dynamic range compression is used to control the dynamic range of a recording, or a single instrument. This can affect loudness variations, both at the micro- and macro scale. In many contexts, the meaning of the term dynamics is therefore not immediately clear. To distinguish between the different aspects of dynamics, the term performed dynamics can be used to refer to the aspects of music dynamics that is controlled exclusively by the performer. See also Accent (music) Glossary of musical terminology Notes References Musical notation Musical terminology Elements of music
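The program-specific default velocity table referred to above did not survive extraction, so the sketch below only illustrates the general idea: ten common markings spread evenly across the MIDI velocity range. The specific numbers are assumptions made for illustration and do not reproduce the defaults of Logic Pro, Sibelius, MuseScore, Finale, or SmartScore.

```python
# Illustrative mapping from dynamic markings to MIDI key velocities (0-127).
# These values are NOT the defaults of any particular notation program; they
# simply spread ten common levels evenly across the usable velocity range.

DYNAMICS = ["pppp", "ppp", "pp", "p", "mp", "mf", "f", "ff", "fff", "ffff"]

def velocity_for(marking, low=8, high=127):
    """Evenly spaced velocity for one of the ten standard markings."""
    step = (high - low) / (len(DYNAMICS) - 1)
    return round(low + DYNAMICS.index(marking) * step)

for m in DYNAMICS:
    print(f"{m:>5s} -> velocity {velocity_for(m)}")
```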
Dynamics (music)
Technology
2,608
66,960,200
https://en.wikipedia.org/wiki/De%C4%9Firmentepe
Değirmentepe or Değirmentepe Hüyük is an archaeological site located about 50 km north of the river Euphrates and 24 km northeast of Malatya province in eastern Anatolia. It is now submerged in the reservoir area of the Karakaya and Atatürk dams. Rescue excavations were undertaken under the supervision of Ufuk Esin of Istanbul University and were interrupted by flooding of the dams. Four archaeological layers, whose dates are determined by techniques such as C14 dating and traces of fusion, have been discovered in this mound: Middle Ages (late Roman-Byzantine period) Iron Age (1000 BCE) Early Bronze Age I (Karaz or Khirbet Kerak culture, end of 4th millennium-beginning of 3rd millennium BCE) Chalcolithic Age (Ubaid period, second half of 5th millennium BCE.) The Chalcolithic Değirmentepe level, of Ubaid-4 date in the second half of the 5th millennium BCE and contemporary with the sites of Tülintepe, Seyh Hüyük, and Kurban Hüyük, contains skeletons of adolescents with deformed skulls. The remains of this Chalcolithic cultural phase are relatively well preserved. However, serious damage caused by occasional flooding of the Euphrates did occur, especially to architectural structures and the cemetery. Cranial deformities are not observed on the human remains identified in the Iron Age and medieval levels at Değirmentepe. The Chalcolithic period of this ancient village is characterized by rectangular mud brick houses that communicate with each other. Domestic animals such as dogs, sheep, goats, pigs, and cattle (Bovinae) are more in evidence than at the beginning of the Chalcolithic. Barley, wheat, oats, and peas were the most commonly cultivated plants. Many ceramics characteristic of Ubaid culture have been found at the site. Archaeologists have discovered 450 sealings there, which indicate intensive commercial activity and production management. Metallurgy Strong evidence of metallurgical activities has been revealed in levels 9 to 6, dating to the Ubaid period, and especially in level 7 (4166 +/- 170 cal BC). Hearths or natural-draft furnaces, slag, ore, and pigment have been recovered throughout the site, in the context of architectural complexes typical of southern Mesopotamian architecture. Unusually, the metallurgical activities at the site appear to have been limited to the melting and casting of copper objects. Arsenical copper objects were clearly manufactured on-site, yet the technological aspects of this production remain unclear. This is because the primary smelting of ore seems to have been undertaken elsewhere, perhaps already at the mining sites. Questions therefore remain as to whether the arsenic was already present in the ores or was added later. In contrast, the related Norşuntepe site provides a better context of production and demonstrates that some form of arsenic alloying was indeed taking place by the 4th millennium BC. Since the slag identified at Norşuntepe contains no arsenic, the arsenic was added separately. References See also Aratashen Prehistory of the Levant Prehistory of Mesopotamia Archaeometallurgy Archaeological sites in Eastern Anatolia Archaeological sites of prehistoric Anatolia
Değirmentepe
Chemistry,Materials_science
667