Dataset columns (type and observed range):
id — int64, 580 to 79M
url — string, lengths 31 to 175
text — string, lengths 9 to 245k
source — string, lengths 1 to 109
categories — string, 160 distinct values
token_count — int64, 3 to 51.8k
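The header above reads as the flattened column summary of a tabular dump (id, url, text, source, categories, token_count). Assuming the rows were exported from a Hugging Face-style dataset, a minimal loading-and-filtering sketch could look like the following; the dataset identifier is a placeholder and the library choice is an assumption, not something stated in the dump.

```python
from datasets import load_dataset  # assumes the Hugging Face "datasets" library is available

# "example-org/wiki-dump" is a placeholder identifier, not the dataset's real name.
ds = load_dataset("example-org/wiki-dump", split="train")

# Columns per the summary above: id, url, text, source, categories, token_count.
physics_rows = ds.filter(lambda row: "Physics" in row["categories"])
long_rows = ds.filter(lambda row: row["token_count"] >= 1000)
print(len(physics_rows), len(long_rows))
```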
781,423
https://en.wikipedia.org/wiki/Desert%20planet
A desert planet, also known as a dry planet, an arid planet, or a dune planet, is a type of terrestrial planet with an arid surface consistency similar to Earth's deserts. Mars is a prominent example of a desert planet in the Solar System. History A 2011 study suggested that not only are life-sustaining desert planets possible, but that they might be more common than Earth-like planets. The study found that, when modeled, desert planets had a much larger habitable zone than ocean planets. The same study also speculated that Venus may have once been a habitable desert planet as recently as 1 billion years ago. It is also predicted that Earth will become a desert planet within a billion years due to the Sun's increasing luminosity. A study conducted in 2013 concluded that hot desert planets without a runaway greenhouse effect can exist within 0.5 AU of Sun-like stars. That study also concluded that a minimum humidity of 1% is needed to wash carbon dioxide out of the atmosphere, but that too much water can itself act as a greenhouse gas. Higher atmospheric pressures increase the range in which the water can remain liquid. Science fiction The concept has become a common setting in science fiction, appearing as early as the 1956 film Forbidden Planet and Frank Herbert's 1965 novel Dune. The environment of the desert planet Arrakis (also known as Dune) in the Dune franchise drew inspiration from the Middle East, particularly the Arabian Peninsula and Persian Gulf, as well as Mexico. Dune in turn inspired the desert planets which prominently appear in the Star Wars franchise, including the planets Tatooine, Geonosis, and Jakku. See also Exoplanet Ocean world References Science fiction themes Hypothetical planet types Planet
Desert planet
Biology
350
21,106,847
https://en.wikipedia.org/wiki/Integrated%20operations
In the petroleum industry, integrated operations (IO) refers to the integration of people, disciplines, organizations, work processes and information and communication technology to make smarter decisions. In short, IO is collaboration with a focus on production. Contents of the term The most striking part of IO has been the use of always-on videoconference rooms between offshore platforms and land-based offices. This includes broadband connections for sharing of data and video surveillance of the platform. This has made it possible to move some personnel onshore and use the existing human resources more efficiently. Instead of having, for example, an expert in geology on duty at every platform, the expert may be stationed on land and be available for consultation for several offshore platforms. It is also possible for a team at an office in a different time zone to consult for the night shift of the platform, so that no land-based workers need work at night. Splitting the team between land and sea demands new work processes, which together with ICT are the two main focus points for IO. Tools like videoconferencing and 3D visualization also create opportunities for new, more cross-disciplinary cooperation. For instance, a shared 3D visualization may be tailored to each member of the group, so that the geologist gets a visualization of the geological structures while the drilling engineer focuses on visualizing the well. Here, real-time measurements from the well are important, but the downhole bandwidth has previously been very restricted. Improvements in bandwidth, better measurement devices, better aggregation and visualization of this information, and improved models that simulate the rock formations and wellbore currently all feed on each other. An important task where all these improvements play together is real-time production optimization. In the process industry in general, the term is used to describe the increased cooperation, independent of location, between operators, maintenance personnel, electricians, production management as well as business management and suppliers to provide a more streamlined plant operation. By deploying IO, the petroleum industry draws on lessons from the process industry. This can be seen in a larger focus on the whole production chain and in management ideas imported from the production and process industries. A prominent idea in this regard is real-time optimization of the whole value chain, from long-term management of the oil reservoir, through capacity allocation in pipe networks, to calculation of the net present value of the produced oil. Reviews of the application of Integrated Operations can be found in papers presented at the biannual Society of Petroleum Engineers Intelligent Energy conferences. A focus on the whole production chain is also seen in debates about how to organize people in an IO organisation, with frequent calls for breaking down the information silos in the oil companies. A large oil company is typically organized in functional silos corresponding to disciplines such as drilling, production and reservoir management. The IO movement regards this as inefficient, pointing out that the activities in any well or field by any of the silos will involve or affect all of the others. While some companies focus on their in-house management structure, others also emphasize the integration and coordination of outside suppliers and collaborators in offshore operations.
For instance, it is pointed out that the oil and gas industry is lagging behind other industries in terms of operational intelligence. Ideas and theories that IO management and work processes build on will be familiar from operations research, knowledge management and continual improvement as well as information systems and business transformation. This is perhaps most evident in the repeated reference to "people, process and technology" in IO discussions. As bullet points, this mirrors many of the aforementioned fields. Since 2010 major mining companies have become implementers of Integrated Operations, most notably Rio Tinto, BHP Billiton and Codelco. Incentives Common to most companies is that IO leads to cost savings, as fewer people are stationed offshore, and to increased efficiency. Lower costs, more efficient reservoir management and fewer mistakes during well drilling will in turn raise profits and make more oil fields economically viable. IO comes at a time when the oil industry is faced with more "brown fields", also referred to as "tail production", where the cost of extracting the oil will be higher than its market value unless major improvements in technology and work processes are made. It has been estimated that deployment of IO could produce 300 billion NOK of added value on the Norwegian continental shelf alone. On a longer time-scale, onshore control and monitoring of oil production may become a necessity as new fields in deeper waters are based purely on unmanned subsea facilities. Moving jobs onshore has also been touted as a way to keep and make better use of an aging workforce, which is regarded as a challenge by western oil and gas companies. As the average age of the industry workforce increases, with many nearing retirement, IO is being leveraged for knowledge sharing and for training the younger workforce. More comfortable onshore jobs, together with "high-tech" tools, have also been promoted as a way to recruit young workers into an industry that is seen as "unsexy", "low-tech" and difficult to combine with a normal family life. Critique The security aspect of reducing the offshore workforce has been raised. Will on-site experience be lost, and can familiarity with the platform and its processes be attained from an onshore office? The new working environment in any case demands changes to HSE routines. The challenges also include clear definition and clarification of roles and responsibilities between onshore and offshore personnel: who in a given situation has the authority to take decisions, the onshore or the offshore staff? The increased integration of the offshore facilities with the onshore office environment and outside collaborators also exposes work-critical ICT infrastructure to the internet and the hazards of everyday ICT. As for the efficiency aspect, some criticize the onshore-offshore collaboration for creating a more bureaucratic working environment. Naming conventions Both the exact terms and the content used to describe IO vary between companies. The oil company Shell has traditionally branded the term Smart Fields, which was an extension of Smart Wells, a term that only referred to remote-controlled well valves. BP uses Field of the future to refer to its innovations in oil production. Chevron has i-field, Honeywell has Digital Suites for Oil and Gas (a set of software and services), and Schlumberger terms it Digital Energy. The latter term, understood as referring to oil and gas, is adopted in the title of the Digital Energy Journal.
This term could have several meanings, as GE Digital Energy, for instance, does not appear to use it in the IO sense. Other terms include e-Field, i-Field, Digital Oilfield, Intelligent Oilfield, Field of the future and Intelligent Energy. Integrated operations has been the preferred term of Statoil, of the Norwegian Oil Industry Association (OLF, a professional body and employers' association for oil and supplier companies) and of vendors such as ABB. IO is also the preferred term for Petrobras. Intelligent Energy is the dominant term in publications revolving around the biannual SPE Intelligent Energy conference, which has been one of the major conferences for the IO movement, along with the annual IO Science and Practice conference, which naturally supports the IO term. See also Integrated asset modelling, a holistic modelling approach in oil and gas, connecting models across disciplines Integrated Operations in the High North, a collaboration project working on the next or second generation of Integrated Operations. ISO 15926, an enabler for the next or second generation of Integrated Operations by integrating data across disciplines and business domains. WITSML, an example of a standardisation effort for real-time drilling data which facilitates integration of disparate computer systems Definition of IO by Global IO References External links Integrated Operations on the StatoilHydro corporate website. Integrated Operations Center for Integrated Operations in the Petroleum Industry Stepchange Global - independent Integrated Operations Advisory Services http://www.stepchangeglobal.com/ Global IO - independent Integrated Operations Management Consultancy Services Petroleum industry Petroleum engineering Petroleum production
Integrated operations
Chemistry,Engineering
1,634
61,016,049
https://en.wikipedia.org/wiki/Testosterone%20propionate/testosterone%20ketolaurate
Testosterone propionate/testosterone ketolaurate (TP/TKL), sold under the brand name Testosid-Depot, is an injectable combination medication of testosterone propionate (TP), an androgen/anabolic steroid, and testosterone ketolaurate (TKL; testosterone caprinoylacetate), an androgen/anabolic steroid. It contains 25 mg TP and 150 to 300 mg TKL in oil solution and is administered by intramuscular injection at regular intervals. The medication has been reported to have a duration of action of about 14 to 20 days. See also List of combined sex-hormonal preparations § Androgens References Abandoned drugs Combined androgen formulations
Testosterone propionate/testosterone ketolaurate
Chemistry
156
31,430,040
https://en.wikipedia.org/wiki/Model-dependent%20realism
Model-dependent realism is a view of scientific inquiry that focuses on the role of scientific models of phenomena. It claims that reality should be interpreted based upon these models, and that where several models overlap in describing a particular subject, multiple, equally valid, realities exist. It claims that it is meaningless to talk about the "true reality" of a model as we can never be absolutely certain of anything. The only meaningful thing is the usefulness of the model. The term "model-dependent realism" was coined by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design. Overview Model-dependent realism asserts that all we can know about "reality" consists of networks of world pictures that explain observations by connecting them by rules to concepts defined in models. Will an ultimate theory of everything be found? Hawking and Mlodinow suggest it is unclear. A world picture consists of the combination of a set of observations accompanied by a conceptual model and by rules connecting the model concepts to the observations. Different world pictures that describe particular data equally well all have equal claims to be valid. There is no requirement that a world picture be unique, or even that the data selected include all available observations. The universe of all observations at present is covered by a network of overlapping world pictures and, where overlap occurs, multiple, equally valid, world pictures exist. At present, science requires multiple models to encompass existing observations: where several models are found for the same phenomena, no single model is preferable to the others within that domain of overlap. Model selection While not rejecting the idea of "reality-as-it-is-in-itself", model-dependent realism suggests that we cannot know "reality-as-it-is-in-itself", but only an approximation of it provided by the intermediary of models. The view of models in model-dependent realism also is related to the instrumentalist approach to modern science, that a concept or theory should be evaluated by how effectively it explains and predicts phenomena, as opposed to how accurately it describes objective reality (a matter possibly impossible to establish). A model is a good model if it: Is elegant Contains few arbitrary or adjustable elements Agrees with and explains all existing observations Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out. "If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model." Of course, an assessment like that is subjective, as are the other criteria. According to Hawking and Mlodinow, even very successful models in use today do not satisfy all these criteria, which are aspirational in nature. See also All models are wrong Commensurability Conceptualist realism Constructivist epistemology Fallibilism Internal realism Instrumentalism Models of scientific inquiry Ontological pluralism Philosophical realism Pragmatism Scientific perspectivism Scientific realism Space mapping References External links Edwards, Chris. Stephen Hawking's other controversial theory: Model Dependent Realism in The Grand Design (critical essay), Skeptic (Altadena, CA), March 22, 2011 Philosophy of physics Metatheory of science Metaphysical realism Scientific modelling
Model-dependent realism
Physics
673
68,662,735
https://en.wikipedia.org/wiki/Marshall%20Building
The Marshall Building, formerly known as the Hoffman & Sons Co. Building, is a historic building in Milwaukee, Wisconsin, United States. Part of the Historic Third Ward, the six-story building is the oldest existing example of structural engineer Claude A. P. Turner's Spiral Mushroom System of flat-slab concrete reinforcement. History The building was originally constructed as a five-story structure in 1906 for John Hoffman & Sons Company, a wholesale grocer specializing in the manufacture of coffee, tea and spices. When the project was in its early stages of design, Milwaukee consulting engineer John Geist came across an article written by Claude A.P. Turner in Engineering News about a new flat-slab system that Turner had developed for the Johnson-Bovey Building in Minneapolis, Minnesota. He traveled to Minneapolis to observe the load tests for the building and asked Turner to design a similar type of flat-slab system for the building in Milwaukee. Turner's flat-slab system differed from previous reinforced concrete construction methods in that it consisted of only floor slabs and supporting columns, eliminating the need for beams below the floors. His Spiral Mushroom System, also known as the Turner System, referred to a cage of radial and tangential bars at the top of each column that imparted shear strength to the slab and also provided cantilever support. A sixth floor was added to the structure in 1911 to accommodate the growth in Hoffman's business and was used to house their coffee roasting equipment; however, beginning in the late 1920s the firm began to share space in the building with other manufacturers. The building was purchased in 1947 by developer George Bockl and was renamed the following year after his son Robert Marshall. Bockl sold the building in 1966 but later reacquired it in 1974. On March 8, 1984, the building became a contributing property of the Historic Third Ward District. In 2002, the Marshall Building was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers. Today the building is primarily used for office space and art galleries. References External links Official website Buildings and structures in Milwaukee Historic Civil Engineering Landmarks Historic district contributing properties in Wisconsin Industrial buildings completed in 1906 1906 establishments in Wisconsin
Marshall Building
Engineering
444
2,179,555
https://en.wikipedia.org/wiki/Airport%20problem
In mathematics and especially game theory, the airport problem is a type of fair division problem in which it is decided how to distribute the cost of an airport runway among different players who need runways of different lengths. The problem was introduced by S. C. Littlechild and G. Owen in 1973. Their proposed solution is: Divide the cost of providing the minimum level of required facility for the smallest type of aircraft equally among the number of landings of all aircraft. Divide the incremental cost of providing the minimum level of required facility for the second smallest type of aircraft (above the cost of the smallest type) equally among the number of landings of all but the smallest type of aircraft. Continue thus until finally the incremental cost of the largest type of aircraft is divided equally among the number of landings made by the largest aircraft type. The authors note that the resulting set of landing charges is the Shapley value for an appropriately defined game. Introduction In an airport problem there is a finite population N and a nonnegative cost function C : N → R. For technical reasons it is assumed that the population is taken from the set of the natural numbers: players are identified with their 'ranking number'. The cost function satisfies the inequality C(i) < C(j) whenever i < j. It is typical for airport problems that the cost C(i) is assumed to be a part of the cost C(j) if i < j, i.e. a coalition S is confronted with costs c(S) := max{C(i) : i in S}. In this way an airport problem generates an airport game (N, c). As the value of each one-person coalition {i} equals C(i), we can recover the airport problem from the airport game. Nash Equilibrium Nash equilibrium, also known as non-cooperative game equilibrium, is an essential term in game theory described by John Nash in 1951. In a game, a strategy that one party will choose regardless of the opponents' choice of strategy is called a dominant strategy. If every participant's chosen strategy is optimal given the strategies of all other participants, then this combination of strategies is defined as a Nash equilibrium. A game may have multiple Nash equilibria or none. Equivalently, a combination of strategies is a Nash equilibrium when each player's equilibrium strategy achieves the maximum value of that player's expected return, given that all other players follow their own equilibrium strategies. Shapley value The Shapley value is a solution concept used in game theory. It is mainly applicable to the following situation: the contributions of the actors are unequal, but the participants cooperate with each other to obtain a profit or return. The resulting allocation is intended to be more reasonable and fair, and it reflects the process of mutual bargaining among the coalition members. However, the benefit distribution given by the Shapley value method does not consider how risk is shared among the organization's members, which essentially implies an assumption of equal risk sharing; the distribution may need to be amended appropriately according to how risk is actually shared. Example An airport needs to build a runway for 4 different aircraft types. The building cost associated with each aircraft type is 8, 11, 13 and 18 for aircraft A, B, C and D respectively. Assuming one landing per aircraft type, sharing each cost increment as described above gives Shapley-value charges of 2, 3, 4 and 9 for A, B, C and D. See also Introduction video of confrontation analysis. List of games in game theory.
References Fair division Cooperative games
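The allocation rule and the example costs above are fully specified, so a short computational sketch can illustrate them. The function names and the assumption of one landing per aircraft type are illustrative additions, not part of the source; the brute-force Shapley computation is included only to confirm the equivalence the authors note.

```python
from itertools import permutations
from math import factorial

# Runway costs from the article's example; "one landing per aircraft type" is an
# added assumption so that per-landing charges equal per-type charges.
costs = {"A": 8, "B": 11, "C": 13, "D": 18}

def sequential_charges(costs):
    """Littlechild-Owen rule: split each cost increment equally among all
    aircraft types that need at least that much runway."""
    charges = dict.fromkeys(costs, 0.0)
    ordered = sorted(costs, key=costs.get)       # smallest runway requirement first
    previous = 0.0
    for i, a in enumerate(ordered):
        increment = costs[a] - previous
        payers = ordered[i:]                     # every type that uses this segment
        for p in payers:
            charges[p] += increment / len(payers)
        previous = costs[a]
    return charges

def shapley_values(costs):
    """Brute-force Shapley value of the airport game c(S) = max of C(i) over S."""
    players = list(costs)
    totals = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        longest_so_far = 0.0
        for a in order:
            totals[a] += max(costs[a] - longest_so_far, 0.0)   # marginal cost added
            longest_so_far = max(longest_so_far, costs[a])
    n_orders = factorial(len(players))
    return {a: t / n_orders for a, t in totals.items()}

print(sequential_charges(costs))   # {'A': 2.0, 'B': 3.0, 'C': 4.0, 'D': 9.0}
print(shapley_values(costs))       # same allocation, as the authors note
```

Running it reproduces the charges of 2, 3, 4 and 9 quoted in the example.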
Airport problem
Mathematics
727
12,066,363
https://en.wikipedia.org/wiki/Bergen%20Academy%20of%20Art%20and%20Design
Bergen Academy of Art and Design () or KHiB was one of two independent and accredited scientific institutions of higher learning in the visual arts and design in Norway (The other is Oslo National Academy of the Arts). It was located in Bergen, Norway. The education included the subject areas fine art, photography, printmaking, ceramics, textiles, visual communication, interior architecture and furniture design. The college had around 350 students. KHiB is now merged with The Grieg Academy - Department of Music, and together they make up the Faculty of Fine Art, Music and Design, as one of seven faculties at University of Bergen (UiB). The faculty was formally established on 1 January 2017, and has three departments: The Art Academy - Department of Art, The Grieg Academy - Department of Music and Department of Design. History Art education has long traditions in Bergen, as the first school of art was established there in 1772, modelled on the Academy of Art in Copenhagen. Bergen Academy of Art and Design was itself a young institution established in 1996, merged from two former institutions; Vestlandets kunstakademi which had been founded in 1972 and Statens høgskole for kunsthåndverk og design which is dated to 1909. The academy had facilities in six different buildings in Bergen centre; in Strømgaten 1, Marken 37, Vaskerelven 8, Kong Oscars gate 62 and C. Sundts. gate 53 and 55. Strømgaten 1, which previously housed Bergen Technical School, was protected by regulations on 25 June 2013. The building was designed by Giovanni Müller and built in the years 1874-76. In 2017, all facilities were co-located in Møllendalsveien in a new purpose-built building designed by Snøhetta. References External links Official site Art schools in Norway Industrial design Educational institutions established in 1996 Graphic design schools Universities and colleges in Norway Education in Bergen 1996 establishments in Norway
Bergen Academy of Art and Design
Engineering
405
10,612,482
https://en.wikipedia.org/wiki/Dimension%20Data
Dimension Data was a company specialising in information technology services. Based in Johannesburg, South Africa, the company maintained operations on every inhabited continent. Dimension Data focused on services including IT consulting, technical and support services, and managed services. The company was the official technology partner of the Tour de France, the Vuelta a España and also sponsored a team of the same name. In 2010, the company was fully acquired by Nippon Telegraph and Telephone (NTT). On 1 July 2019, all Dimension Data operations, excluding those in the Middle East and Africa, became part of NTT Ltd. History Dimension Data was founded in 1983 by Keith McLachlan, Werner Sievers, Jeremy Ord, Peter Neale, and Kevin Hamilton. The company was listed on the Johannesburg Stock Exchange on 15 July 1987. Jeremy Ord was appointed as the company's executive chairman in that same year (a position he continued to hold as of 2016). In 1991, the company became the official South African distributor for Cisco Systems. In 1993, the company expanded to Botswana, and, between 1995 and 1997, it began further expansion into the Asia Pacific region. In 1996, Dimension Data purchased a 45% stake in the Australian company, ComTech. It would later buyout the company fully in 2000. The company also bought a stake in the South African company, Internet Solutions, in 1996. Dimension Data would increase their stake in the company the following year. Also in 1997 the company purchased majority stakes in The Merchants Group and Datacraft. Between 1998 and 2000, the company focused its expansion efforts on the northern hemisphere, investing in and acquiring a variety of companies in Europe, the United Kingdom, and the United States. One such acquisition was the 1998 purchase of London-based telecommunications company, Plessey. On 31 July 2000, Dimension Data was listed on the London Stock Exchange raising $1.25 billion in the process. By 2003, the company's revenue had jumped to $2 billion. In 2004, Brett Dawson was appointed as CEO of the company and Jeremy Ord was appointed as Group Executive Chairman. Over the next 6 years, the company expanded in Africa, the Middle East, and South America. By September 2009, the company had an annual revenue of around $4 billion. In July 2010, Dimension Data was acquired by Nippon Telegraph and Telephone (NTT) for £2.1 billion ($3.2 billion). In October of that year, NTT announced that Dimension Data would be delisted from both the Johannesburg Stock Exchange and the London Stock Exchange by the end of the year. Over the next few years, Dimension Data continued acquiring and integrating businesses like OpSource, NextiraOne, and Oakton. In June 2016, Brett Dawson stepped down as CEO and was replaced by then COO, Jason Goodall. At the time, Dimension Data maintained 31,000 staff in 58 countries across all 6 inhabited continents. The annual revenue of the company was $7.5 billion. On 1 July 2019, the majority of Dimension Data became part of NTT Ltd., along with NTT Security and NTT Communications. However, the brand continues to operate in the Middle East & Africa with Grant Bodley as CEO. On 1 April 2021, Werner Kapp was appointed as CEO of the company, as Grant Bodley has resigned. In June 2022, NTT announced the appointment of Alan Turnley-Jones to succeed Kapp as CEO. In October 2023, it was announced that Dimension Data will become NTT Data, Inc. with effect from 1 April 2024. Alan Turnley-Jones will continue to lead the business in Africa and the Middle East. 
Acquisitions and subsidiaries Dimension Data's current subsidiaries include AccessKenya Group, AlwaysOn, ContinuitySA, e2y Commerce, Earthwave, Euricom, Internet Solutions, JQ Network, Merchants, Nexus IS, Oakton, Plessey, Security Assessment, SQL Services, Training Partners and Viiew. Some of its early acquisitions included Australian companies, ComTech and Internet Solutions, in 1996. Merchants and Datacraft were acquired a year later. In 2000, the company purchased Plessey in a joint venture with WorldWide African Investment Holdings. SQL Services was acquired in 2008. After Nippon Telegraph and Telephone purchased the company in 2010, Dimension Data began adding more subsidiaries to the fold. These include e2y Commerce, a digital experience, commerce and marketplace consultancy for SAP CX (Hybris) and Mirakl, Earthwave, an information and communications technology specialist based in Australia and courses on cybersecurity under the umbrella of Dimension Data in 2013, AccessKenya, and Nexus IS in 2014. The Nexus acquisition nearly doubled Dimension Data's presence in the United States. Other Dimension Data subsidiaries, like AlwaysOn and ContinuitySA, were first acquired by fellow subsidiary, Internet Solutions. Products and services Dimension Data provides information technology products and services, such as those for data centers, security, network integration and management and Microsoft support. The company is placing its primary focus on four service areas. These include: digital infrastructure, hybrid cloud, workspaces for tomorrow, and cybersecurity. In addition to offering the sale of physical data centers, it provides operation, management, transformation, and relocation of such data centers. Dimension Data also manages and operates servers and storage and provides backup services in case of damage or disaster. In 2016 in association with the Amaury Sports Organization, Dimension Data overhauled the digital infrastructure for the Tour de France. The company added big data upgrades that allowed for new technologies like data capture, race coordination, improved graphics, and analytics. Hybrid cloud Dimension Data provides both public and private cloud computing servers. They also offer a hybrid cloud in which customers can choose which resources are public or private. Data can be stored on a customer's own servers or Dimension Data's servers. The company operates locations including Johannesburg, Sydney, and London. Cybersecurity Dimension Data provides cybersecurity for businesses on an enterprise scale. This includes infrastructure security, governance and compliance, risk assessment, and a variety of other cybersecurity services. It also offers services for mobile security and data leakage prevention. Managed Networking and Collaboration Dimension Data offers platform-based managed networking and collaboration, which include: the design, implementation and proactive support SD-WAN, SASE, LAN, edge networking and cloud voice. With hardware tech/attach and underlay aggregation, Dimension Data offers MEA enterprise clients proactive management of their network, as a service, via its globally delivered SPEKTRA-powered technology. Recognition In March 2013, Dimension Data was named as a Leader in the Green Quadrant Sustainable Technology Services report by Verdantix. Dimension Data was named along with seven other companies as having a dedicated sustainability practice, a wide range of capabilities, and a strong track record in corporate sustainability. 
In April 2015, Dimension Data was honored with three separate Cisco Partner of the Year awards in three different regions. The following year, the company won the 2016 Microsoft Country Partner of the Year Award for Rwanda and Tanzania regions along with the Communications Partner of the Year and Cloud Productivity Partner of the Year in the Global arena. Cycling On 3 May 2015, the Amaury Sports Organisation (ASO) signed a five-year deal with Dimension Data for Dimension Data to be the official technology partner on cycling events. As part of this deal, Dimension Data provides telemetrics including GPS positioning and speed in real time. In September 2015, the company also became the sponsor of the former MTN-Qhubeka becoming Team Dimension Data for Qhubeka. References Information technology companies of South Africa Companies established in 1983 Companies formerly listed on the London Stock Exchange Cloud platforms Cloud computing providers Companies based in Johannesburg International information technology consulting firms Outsourcing companies International management consulting firms Nippon Telegraph and Telephone
Dimension Data
Technology
1,593
47,262,026
https://en.wikipedia.org/wiki/Ultra-high-definition%20television
Ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD and Super Hi-Vision) today includes 4K UHD and 8K UHD, which are two digital video formats with an aspect ratio of 16:9. These were first proposed by NHK Science & Technology Research Laboratories and later defined and approved by the International Telecommunication Union (ITU). The Consumer Electronics Association announced on October 17, 2012, that "Ultra High Definition", or "Ultra HD", would be used for displays that have an aspect ratio of 16:9 or wider and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840 × 2160 pixels. In 2015, the Ultra HD Forum was created to bring together the end-to-end video production ecosystem to ensure interoperability and produce industry guidelines so that adoption of ultra-high-definition television could accelerate. The forum's list of commercial services offering 4K resolution grew from just 30 in Q3 2015 to 55 available around the world. The "UHD Alliance", an industry consortium of content creators, distributors, and hardware manufacturers, announced during a Consumer Electronics Show (CES) 2016 press conference its "Ultra HD Premium" specification, which defines the resolution, bit depth, color gamut and high dynamic range (HDR) performance required for Ultra HD (UHDTV) content and displays to carry its Ultra HD Premium logo. Alternative terms Ultra-high-definition television is also known as Ultra HD, UHD, UHDTV, and 4K. In Japan, 8K UHDTV will be known as Super Hi-Vision since Hi-Vision was the term used in Japan for HDTV. In the consumer electronics market, companies had previously used only the term 4K at CES 2012, but that had changed to "Ultra HD" by CES 2013. "Ultra HD" was selected by the Consumer Electronics Association after extensive consumer research, as the term had also been established with the introduction of "Ultra HD Blu-ray". Technical details Resolution Two resolutions are defined as UHDTV: UHDTV-1 is 3840 pixels wide by 2160 pixels tall (8.3 megapixels), which is four times as many pixels as the 1920 × 1080 (2.07 megapixels) of current 1080p HDTV (Full HD). It is also known as 2160p and 4K UHD. Although roughly similar in resolution to 4K digital cinema formats, it should not be confused with other 4K resolutions such as DCI 4K (Cinema 4K). The total number of pixels of RGB stripe type is 8.3 megapixels. UHDTV-2 is 7680 pixels wide by 4320 pixels tall (33.18 megapixels), also referred to as 4320p and 8K UHD, which is sixteen times as many pixels as current 1080p HDTV and brings it closer to the detail level of 15/70 mm IMAX. NHK advertises the 8K UHDTV format with 22.2 surround sound as Super Hi-Vision, which can be broadcast with H.264 codecs. Color space, dynamic range, frame rate and resolution/aliasing The human visual system has a limited ability to discern improvements in resolution when picture elements are already small enough or distant enough from the viewer. At some home viewing distances and current TV sizes, HD resolution is near the limits of resolution for the eye, and increasing resolution to 4K has little perceptual impact if consumers are beyond the critical distance (Lechner distance) needed to appreciate the differences in pixel count between 4K and HD. One exception is that even if resolution surpasses the resolving ability of the human eye, there is still an improvement in the way the image appears because higher resolutions reduce spatial aliasing.
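As a quick check of the resolution figures quoted in the section above, the following sketch re-derives the megapixel counts and the 4x/16x multiples relative to 1080p; the format names and the 1920 × 1080 baseline are the only inputs, and the script itself is purely illustrative.

```python
# The 1080p baseline and the two UHDTV formats are as defined in the article;
# this only re-derives the megapixel counts and multiples quoted there.
formats = {
    "1080p HDTV":       (1920, 1080),
    "UHDTV-1 (4K UHD)": (3840, 2160),
    "UHDTV-2 (8K UHD)": (7680, 4320),
}

baseline = formats["1080p HDTV"][0] * formats["1080p HDTV"][1]
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} megapixels, {pixels / baseline:.0f}x 1080p")
# UHDTV-1 -> 8.29 megapixels (4x 1080p); UHDTV-2 -> 33.18 megapixels (16x 1080p)
```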
UHDTV provides other image enhancements in addition to pixel density. Specifically, dynamic range and color are greatly enhanced; the resulting differences in saturation and contrast are readily resolved and greatly improve the experience of 4K TV compared to HDTV. UHDTV allows the future use of the new Rec. 2020 (UHDTV) color space, which can reproduce colors that cannot be shown with the current Rec. 709 (HDTV) color space. In terms of the CIE 1931 color space, the Rec. 2020 color space covers 75.8%, compared with 53.6% for the DCI-P3 digital cinema reference projector color space, 52.1% for the Adobe RGB color space, and only 35.9% for the Rec. 709 color space. UHDTV's increases in dynamic range allow not only brighter highlights but also increased detail in the greyscale. UHDTV also allows for frame rates up to 120 frames per second (fps). UHDTV potentially allows Rec. 2020, higher dynamic range, and higher frame rates to work on HD services without increasing resolution to 4K, providing improved quality without as large an increase in bandwidth demand. History 2001–2005 The first displays capable of displaying 4K content appeared in 2001, as the IBM T220/T221 LCD monitors. NHK researchers built a UHDTV prototype which they demonstrated in 2003. They used an array of 16 HDTV recorders with a total capacity of almost 3.5TB that could capture up to 18 minutes of test footage. The camera itself was built with four CCDs, each with a resolution well below the target. Using two CCDs for green and one each for red and blue, they then used a spatial pixel offset method to bring it to the full UHDTV resolution. Subsequently, an improved and more compact system was built using CMOS image sensor technology, and the CMOS image sensor system was demonstrated at Expo 2005, Aichi, Japan, the NAB 2006 and NAB 2007 conferences, Las Vegas, at IBC 2006 and IBC 2008, Amsterdam, Netherlands, and CES 2009. A review of the NAB 2006 demo was published in a broadcast engineering e-newsletter. Individuals at NHK and elsewhere projected that UHDTV would be available in domestic homes somewhere between 2015 and 2020, with Japan expected to get it in the 2016 time frame. 2006–2010 On November 2, 2006, NHK demonstrated a live relay of a UHDTV program over a 260 kilometer distance by a fiber-optic network. Using dense wavelength division multiplexing (DWDM), a speed of 24Gbit/s was achieved with a total of 16 different wavelength signals. On December 31, 2006, NHK demonstrated a live relay of their annual Kōhaku Uta Gassen over IP from Tokyo to a screen in Osaka. Using a codec developed by NHK, the video was compressed from 24Gbit/s to between 180 and 600Mbit/s and the audio was compressed from 28Mbit/s to between 7 and 28Mbit/s. Uncompressed, a 20-minute broadcast would require roughly 4 TB of storage. The SMPTE first released Standard 2036 for UHDTV in 2007. UHDTV was defined as having two levels, called UHDTV1 (3840 × 2160) and UHDTV2 (7680 × 4320). In May 2007, the NHK did an indoor demonstration at the NHK Open House in which a UHDTV signal (at 60fps) was compressed to a 250Mbit/s MPEG2 stream. The signal was input to a 300MHz wide band modulator and broadcast using 500MHz QPSK modulation. This "on the air" transmission had a very limited range (less than 2 meters), but showed the feasibility of a satellite transmission from the 36,000km orbit. In 2008, Aptina Imaging announced the introduction of a new CMOS image sensor specifically designed for the NHK UHDTV project.
During IBC 2008 Japan's NHK, Italy's RAI, BSkyB, Sony, Samsung, Panasonic Corporation, Sharp Corporation, and Toshiba (with various partners) demonstrated the first ever public live transmission of UHDTV, from London to the conference site in Amsterdam. On June 9, 2010, Panasonic announced that its professional plasma display lineup would include an plasma display with 4K resolution. At the time of announcement, it was the largest 4K display and the largest television. On September 29, 2010, the NHK partnered up and recorded The Charlatans live in the UK in the UHDTV format, before broadcasting over the internet to Japan. 2011 On May 19, 2011, Sharp in collaboration with NHK demonstrated a direct-view LCD display capable of pixels at 10 bits per channel. It was the first direct-view Super Hi-Vision-compatible display released. Before 2011, UHDTV allowed for frame rates of 24, 25, 50, and 60fps. In an ITU-R meeting during 2011, an additional frame rate was added to UHDTV of 120fps. 2012 On February 23, 2012, NHK announced that with Shizuoka University they had developed an 8K sensor that can shoot video at 120fps. In April 2012, Panasonic, in collaboration with NHK announced a display ( at 60fps), which has 33.2 million 0.417mm square pixels. In April 2012, the four major South Korean terrestrial broadcasters (KBS, MBC, SBS, and EBS) announced that in the future, they would begin test broadcasts of UHDTV on channel 66 in Seoul. At the time of the announcement, the UHDTV technical details had not yet been decided. LG Electronics and Samsung are also involved in UHDTV test broadcasts. In May 2012, NHK showed the world's first ultra-high-definition shoulder-mount camera. By reducing the size and weight of the camera, the portability had been improved, making it more maneuverable than previous prototypes, so it could be used in a wide variety of shooting situations. The single-chip sensor uses a Bayer color-filter array, where only one color component is acquired per pixel. Researchers at NHK also developed a high-quality up-converter, which estimates the other two color components to convert the output into full resolution video. Also in May 2012, NHK showed the ultra-high-definition imaging system it has developed in conjunction with Shizuoka University, which outputs 33.2-megapixel video at 120fps with a color depth of 12bits per component. As ultra-high-definition broadcasts at full resolution are designed for large, wall-sized displays, there is a possibility that fast-moving subjects may not be clear when shot at 60fps, so the option of 120fps has been standardized for these situations. To handle the sensor output of approximately 4 billion pixels per second with a data rate as high as 51.2Gbit/s, a faster analog-to-digital converter has been developed to process the data from the pixels, and then a high-speed output circuit distributes the resulting digital signals into 96 parallel channels. This CMOS sensor is smaller and uses less power when compared to conventional ultra-high-definition sensors, and it is also the world's first to support the full specifications of the ultra-high-definition standard. During the 2012 Summer Olympics in Great Britain, the format was publicly showcased by the world's largest broadcaster, the BBC, which set up 15-meter-wide screens in London, Glasgow, and Bradford to allow viewers to see the Games in ultra-high definition. 
On May 31, 2012, Sony released the VPL-VW1000ES 4K 3D Projector, the world's first consumer-prosumer projector using the 4K UHDTV system, with the shutter-glasses stereoscopic 3D technology priced at US$24,999.99. On August 22, 2012, LG announced the world's first 3D UHDTV using the 4K system. On August 23, 2012, UHDTV was officially approved as a standard by the International Telecommunication Union (ITU), standardizing both 4K and 8K resolutions for the format in ITU-R Recommendation BT.2020. On September 15, 2012, David Wood, Deputy Director of the EBU Technology and Development Department (who chairs the ITU working group that created Rec. 2020), told The Hollywood Reporter that South Korea plans to begin test broadcasts of 4K UHDTV next year. Wood also said that many broadcasters have the opinion that going from HDTV to 8K UHDTV is too much of a leap and that it would be better to start with 4K UHDTV. In the same article, Masakazu Iwaki, NHK Research senior manager, said that the NHK plan to go with 8K UHDTV is for economic reasons since directly going to 8K UHDTV would avoid an additional transition from 4K UHDTV to 8K UHDTV. On October 18, 2012, the Consumer Electronics Association (CEA) announced that it had been unanimously agreed by the CEA's Board of Industry Leaders that the term "Ultra High-Definition", or "Ultra HD", would be used for displays that have a resolution of at least 8 megapixels with a vertical resolution of at least 2,160 pixels and a horizontal resolution of at least 3,840 pixels. The Ultra HD label also requires the display to have an aspect ratio of 16:9 or wider and to have at least one digital input that can carry and present a native video signal of without having to rely on a video scaler. Sony announced they would market their 4K products as 4K Ultra High-Definition (4K UHD). On October 23, 2012, Ortus Technology Co., Ltd announced the development of the world's smallest pixel LCD panel with a size of and a pixel density of 458px/in. The LCD panel is designed for medical equipment and professional video equipment. On October 25, 2012, LG Electronics began selling the first flat panel Ultra HD display in the United States with a resolution of . The LG 84LM9600 is an flat panel LED-backlit LCD display with a price of US$19,999 though the retail store was selling it for US$16,999. On November 29, 2012, Sony announced the 4K Ultra HD Video Player—a hard disk server preloaded with ten 4K movies and several 4K video clips that they planned to include with the Sony XBR-84X900. The preloaded 4K movies are The Amazing Spider-Man, Total Recall (2012), The Karate Kid (2010), Salt, Battle: Los Angeles, The Other Guys, Bad Teacher, That's My Boy, Taxi Driver, and The Bridge on the River Kwai. Additional 4K movies and 4K video clips will be offered for the 4K Ultra HD Video Player in the future. On November 30, 2012, Red Digital Cinema Camera Company announced that they were taking pre-orders for the US$1,450 REDRAY 4K Cinema Player, which can output 4K resolution to a single 4K display or to four 1080p displays arranged in any configuration via four HDMI1.4 connections. Video output can be DCI 4K (), 4K Ultra HD, 1080p, and 720p at frame rates of up to 60fps with a color depth of up to 12bpc with 4:2:2 chroma subsampling. Audio output can be up to 7.1 channels. Content is distributed online using the ODEMAX video service. External storage can be connected using eSATA, Ethernet, USB, or a Secure Digital memory card. 
2013 On January 6, 2013, the NHK announced that Super Hi-Vision satellite broadcasts could begin in Japan in 2016. On January 7, 2013, Eutelsat announced the first dedicated 4K Ultra HD channel. Ateme uplinks the H.264/MPEG-4 AVC channel to the Eutelsat 10A satellite. The 4K Ultra HD channel has a frame rate of 50fps and is encoded at 40Mbit/s. The channel started transmission on January 8, 2013. On the same day Qualcomm CEO Paul Jacobs announced that mobile devices capable of playing and recording 4K Ultra HD video would be released in 2013 using the Snapdragon 800 chip. On January 8, 2013, Broadcom announced the BCM7445, an Ultra HD decoding chip capable of decoding High Efficiency Video Coding (HEVC) at up to at 60fps. The BCM7445 is a 28nm ARM architecture chip capable of 21,000 Dhrystone MIPS with volume production estimated for the middle of 2014. On the same day THX announced the "THX 4K Certification" program for Ultra HD displays. The certification involves up to 600 tests and the goal of the program is so that "content viewed on a THX Certified Ultra HD display meets the most exacting video standards achievable in a consumer television today". On January 14, 2013, Blu-ray Disc Association president Andy Parsons stated that a task force created three months ago is studying an extension to the Blu-ray Disc specification that would add support for 4K Ultra HD video. On January 25, 2013, the BBC announced that the BBC Natural History Unit would produce Survival—the first wildlife TV series recorded in 4K resolution. This was announced after the BBC had experimented with 8K during the London Olympics. On January 27, 2013, Asahi Shimbun reported that 4K Ultra HD satellite broadcasts would start in Japan with the 2014 FIFA World Cup. Japan's Ministry of Internal Affairs and Communications decided on this move to stimulate demand for 4K Ultra HD TVs. On February 21, 2013, Sony announced that the PlayStation 4 would support 4K resolution output for photos and videos but wouldn't render games at that resolution. On March 26, 2013, the Advanced Television Systems Committee (ATSC) announced a call for proposals for the ATSC 3.0 physical layer that specifies support for resolution at 60fps. On April 11, 2013, Bulb TV created by Canadian entrepreneur Evan Kosiner became the first broadcaster to provide a 4K linear channel and VOD content to cable and satellite companies in North America. The channel is licensed by the Canadian Radio-Television and Telecommunications Commission to provide educational content. On April 19, 2013, SES announced the first Ultra HD transmission using the HEVC standard. The transmission had a resolution of and a bit rate of 20Mbit/s. On May 9, 2013, NHK and Mitsubishi Electric announced that they had jointly developed the first HEVC encoder for 8K Ultra HD TV, which is also called Super Hi-Vision (SHV). The HEVC encoder supports the Main 10 profile at Level 6.1 allowing it to encode 10bpc video with a resolution of at 60fps. The HEVC encoder has 17 3G-SDI inputs and uses 17 boards for parallel processing with each board encoding a row of pixels to allow for real time video encoding. The HEVC encoder is compliant with draft 4 of the HEVC standard and has a maximum bit rate of 340Mbit/s. The HEVC encoder was shown at the NHK Science & Technology Research Laboratories Open House 2013 that took place from May 30 to June 2. At the NHK Open House 2013 the HEVC encoder used a bit rate of 85Mbit/s, which gives a compression ratio of . 
On May 21, 2013, Microsoft announced the Xbox One, which supports 4K resolution () video output and 7.1 surround sound. Yusuf Mehdi, corporate vice president of marketing and strategy for Microsoft, has stated that there is no hardware restriction that would prevent Xbox One games from running at 4K resolution. On May 30, 2013, Eye IO announced that their encoding technology was licensed by Sony Pictures Entertainment to deliver 4K Ultra HD video. Eye IO encodes their video assets at and includes support for the xvYCC color space. In mid-2013, a Chinese television manufacturer produced the first 50-inch UHD television set costing less than $1,000. On June 11, 2013, Comcast announced that they had demonstrated the first public U.S.-based delivery of 4K Ultra HD video at the 2013 NCTA show. The demonstration included segments from Oblivion, Defiance, and nature content sent over a DOCSIS 3.0 network. On June 13, 2013, ESPN announced that they would end the broadcast of the ESPN 3D channel by the end of that year and would "...experiment with things like UHDTV." On June 26, 2013, Sharp announced the LC-70UD1U, which is a 4K Ultra HD TV. The LC-70UD1U is the world's first TV with THX 4K certification. On July 2, 2013, Jimmy Kimmel Live! recorded in 4K Ultra HD a performance by musical guest Karmin, and the video clip was used as demonstration material at Sony stores. On July 3, 2013, Sony announced the release of their 4K Ultra HD Media Player with a price of US$7.99 for rentals and US$29.99 for purchases. The 4K Ultra HD Media Player only worked with Sony's 4K Ultra HD TVs. On July 15, 2013, the CTA published CTA-861-F, a standard that applies to interfaces such as DVI, HDMI, and LVDS. The CTA-861-F standard adds support for several Ultra HD video formats and additional color spaces. On September 2, 2013 Acer announced the first smartphone, dubbed Liquid S2, capable of recording 4K video. On September 4, 2013, the HDMI Forum released the HDMI 2.0 specification, which supports 4K resolution at 60fps. On the same day, Panasonic announced the Panasonic TC-L65WT600—the first 4K TV to support 4K resolution at 60FPS. The Panasonic TC-L65WT600 has a screen, support for DisplayPort1.2a, support for HDMI2.0, an expected ship date of October, and a suggested retail price of US$5,999. On September 12–17, 2013, at the 2013 IBC Conference in Amsterdam, Nagra introduced an Ultra HD User Interface called Project Ultra based on HTML5, which works with OpenTV 5. On October 4, 2013, DigitalEurope announced the requirements for their UHD logo in Europe. The DigitalEurope UHD logo requires that the display support a resolution of at least , a aspect ratio, the Rec. 709 (HDTV) color space, 8bpc color depth, a frame rate of 24, 25, 30, 50, or 60fps, and at least 2-channel audio. On October 29, 2013, Elemental Technologies announced support for real-time 4K Ultra HD HEVC video processing. Elemental provided live video streaming of the 2013 Osaka Marathon on October 27, 2013, in a workflow designed by K-Opticom, a telecommunications operator in Japan. Live coverage of the race in 4K Ultra HD was available to viewers at the International Exhibition Center in Osaka. This transmission of 4K Ultra HD HEVC video in real-time was an industry-first. On November 28, 2013, Organizing Committee of the XXII Olympic Winter Games and XI Paralympic Winter Games 2014 in Sochi chief Dmitri Chernyshenko stated that the 2014 Olympic Winter Games would be shot in 8K Super Hi-Vision. 
On December 25, 2013, YouTube added a "2160p 4K" option to its videoplayer. Previously, a visitor had to select the "original" setting in the video quality menu to watch a video in 4K resolution. With the new setting, YouTube users can much more easily identify and play 4K videos. On December 30, 2013, Samsung announced availability of its Ultra HDTV for custom orders, making this the world's largest Ultra HDTV so far. 2014 On January 22, 2014, European Southern Observatory became the first scientific organization to deliver Ultra HD footage at regular intervals. On May 6, 2014, France announced DVB-T2 tests in Paris for Ultra HD HEVC broadcast with objectives to replace by 2020 the current DVB-T MPEG4 HD national broadcast. On May 26, 2014, satellite operator Eutelsat announced the launch of Europe's first Ultra HD demo channel in HEVC, broadcasting at 50fps. The channel is available on the Hot Bird satellites and can be watched by viewers with 4K TVs equipped with DVB-S2 demodulators and HEVC decoders. In June 2014, the FIFA World Cup of that year (held in Brazil) became the first shot entirely in 4K Ultra HD, by Sony. The European Broadcasting Union (EBU) broadcast matches of the FIFA World Cup to audiences in North America, Latin America, Europe and Asia in Ultra HD via SES' NSS-7 and SES-6 satellites. Indian satellite TV provider unveils its plan to launch 4K UHD service early in 2015 and showcased live FIFA World Cup quarter final match in 4K UHD through Sony Entertainment Television Sony SIX. On June 24, 2014, the CEA updated the guidelines for Ultra High-Definition and released guidelines for Connected Ultra High-Definition, adding support for internet video delivered with HEVC. The CEA is developing a UHD logo for voluntary use by companies that make products that meet CEA guidelines. The CEA also clarified that "Ultra High-Definition", "Ultra HD", or "UHD" can be used with other modifiers and gave an example with "Ultra High-Definition TV 4K". On July 15, 2014, Researchers from the University of Essex both captured and delivered its graduation ceremonies in 4K UHDTV over the internet using H.264 in realtime. The 4K video stream was published at 8Mbit/s and 14Mbit/s for all its 11 ceremonies, with people viewing in from countries such as Cyprus, Bulgaria, Germany, Australia, UK, and others. On September 4, 2014, Canon Inc. announced that a firmware upgrade would add Rec. 2020 color space support to their EOS C500 and EOS C500 PL camera models and their DP-V3010 4K display. On September 4, 2014, Microsoft announced a firmware update for the Microsoft Lumia 1020, 930, Icon, and 1520 phones that adds 4K video recording. The update was later released by the individual phone carriers over the following weeks and months after the announcement. On September 5, 2014, the Blu-ray Disc Association announced that the 4K Blu-ray Disc specification supports 4K video at 60fps, High Efficiency Video Coding, the Rec. 2020 color space, high dynamic range, and 10bpc color depth. 4K Blu-ray Disc will have a data rate of at least 50Mbit/s and may include support for 66GB and 100GB discs. 4K Blu-ray Disc began licensing in 2015, with 4K Blu-ray Disc players released late that year. On September 5, 2014, DigitalEurope released an Ultra HD logo for companies that meet their technical requirements. On September 11, 2014, satellite operator SES announced the first Ultra HD conditional access-protected broadcast using DVB standards at the IBC show in Amsterdam. 
The demonstration used a Samsung Ultra HD TV, with a standard Kudelski SmarDTV CI Plus conditional access module, to decrypt a full pixel CAS-protected Ultra HD signal in HEVC broadcast via an SES Astra satellite at 19.2°E. On November 19, 2014, rock band Linkin Park's concert at Berlin's O2 World Arena was broadcast live in Ultra HD via an Astra 19.2°E satellite. The broadcast was encoded in the UHD 4K standard with the HEVC codec (50fps and a 10bpc color depth), and was a joint enterprise of satellite owner SES, SES Platform Services (later MX1, now part of SES Video) and Samsung. 2015 Indian satellite pay TV provider Tata Sky launched UHD service and UHD Set Top Box on 9 January 2015. The service is 4K at 50fps and price of the UHD box is ₹5900 for existing SD/HD customers and ₹6400 for new customers. The 2015 Cricket World Cup was telecast live in 4K for free to those who own Tata Sky's UHD 4K STB. In May 2015, France Télévisions broadcast matches from Roland Garros live in Ultra HD via the EUTELSAT 5 West A satellite in the HEVC standard. The channel "France TV Sport Ultra HD" was available via the Fransat platform for viewers in France. In May 2015, satellite operator SES announced that Europe's first free-to-air Ultra HD channel (from Germany's pearl.tv shopping channel) would launch in September 2015, broadcast in native Ultra HD via the Astra 19.2°E satellite position. In June 2015, SES launched its first Ultra HD demonstration channel for cable operators and content distributors in North America to prepare their systems and test their networks for Ultra HD delivery. The channel is broadcast from the SES-3 satellite at 103°W. In June 2015, SPI International previewed its "4K FunBox UHD" Ultra HD channel on the HOT BIRD 4K1 channel, in advance of its commercial launch on Eutelsat's HOT BIRD satellites in the autumn. In July 2015, German HD satellite broadcaster HD+ and TV equipment manufacturer TechniSat announced an Ultra HD TV set with integrated decryption for reception of existing HD+ channels (available in the Autumn) and a new Ultra HD demonstration channel due to begin broadcasting in September. On 2 August 2015, The FA Community Shield in England was broadcast in Ultra HD by broadcast company BT Sport, becoming the first live football game shown in Ultra HD on the world's first commercial Ultra HD channel. The match was shown on Europe's first Ultra HD channel, BT Sport Ultra HD where selected live English Premier League and European Champions League matches were broadcast. Fashion One 4K launched on September 2, 2015 becoming the first global Ultra HD TV channel. Reaching nearly 370 million households across the world, the fashion, lifestyle and entertainment network broadcasts via satellite from Measat at 91.5°E (for Asia Pacific, Middle East, Australia) and from SES satellites Astra 19.2°E (for Europe), SES-3 at 103°W (for North America), NSS-806 at 47.5°W (for South America). In September 2015, Eutelsat presented new consumer research, conducted by TNS and GfK, on Ultra HD and screen sales in key TV markets. The study looked at consumer exposure to Ultra HD, perceived benefits and willingness to invest in equipment and content. GfK predicts a 200% increase in Ultra HD screen sales from June to December 2015, with sales expected to reach five million by the end of the year. GfK also forecasts that Ultra HD screens in 2020 will represent more than 70% of total sales across Europe and almost 60% in the Middle East and North Africa. 
On 2 September 2015, Sony unveiled the Xperia Z5 Premium, the first smartphone with a 4K display. On 9 September 2015, Apple Inc. announced that its new smartphone, the iPhone 6S, could record video in 4K. On 6 October 2015, Microsoft unveiled the latest version of their Microsoft Surface Book laptop with a display of "over 6 million pixels" and their new phones the Microsoft Lumia 950 and 950 XL, which, aside from the 4K video recording that their predecessors included, feature a display of "over 5 million pixels". On 8 December 2015, the ceremony of the opening of the Holy Door in Vatican City, which marked the beginning of the Jubilee Year of Mercy in the Roman Catholic Church, was the first worldwide Ultra HD broadcast via satellite. The event was produced by the Vatican Television Center with the support of Eutelsat, Sony, Globecast and DBW Communication. The team experimented with live 4K/high dynamic range images, in particular with Hybrid Log Gamma (HLG) signals, using technology developed by the BBC's R&D division and Japan's public broadcaster NHK. 2016 The "UHD Alliance", an industry consortium of content creators, distributors, and hardware manufacturers, announced its "Ultra HD Premium" specification on January 11, 2016, during its CES 2016 press conference; the specification defines the resolution, bit depth, color gamut and high dynamic range (HDR) performance required for Ultra HD (UHDTV) content and displays to carry the Ultra HD Premium logo. On April 2, 2016, the Ultra-high-definition television demo channel UHD1 broadcast the Le Corsaire ballet in Ultra HD live from the Vienna State Opera. The programme was produced by Astra satellite owner SES in collaboration with European culture channel ARTE, and transmitted free-to-air, available to anyone with reception of the Astra 19.2°E satellites and an Ultra HD screen equipped with an HEVC decoder. As of April 2016, The NPD Group reported that 6 million 4K UHD televisions had been sold. In May 2016, Modern Times Group, owner of the Viasat DTH platform, announced the launch of Viasat Ultra HD, the first UHD channel for the Nordic region. The channel features selected live sport events especially produced in Ultra HD and was due to launch in the autumn via the SES-5 satellite at 5°E. Viasat also announced an Ultra HD set-top box from Samsung and a TV module that enables existing UHD TVs to display the channel. Satellite operator SES said that the launch of Viasat Ultra HD brought the number of UHD channels (including test channels and regional versions) carried on SES satellites to 24, or 46% of all UHD channels broadcast via satellite worldwide. In August 2016, Sky announced that 4K broadcasts would begin via their new Sky Q 2TB box. The opening match of the 2016–17 Premier League between Hull City and Leicester City on Sky Sports was the first 4K transmission. 2017 On 29 September 2017, BSAT-4a, a satellite dedicated to UHDTV programming and claimed to be "the world's first 8K satellite", was launched from the Guiana Space Centre aboard an Ariane 5 rocket. BSAT-4a would be used for the 2020 Summer Olympics held in Japan. Additionally, in September 2017, Kaleidescape, a manufacturer of home-theater movie players and servers, made 4K UHD movies compatible with its movie store and movie players. In December 2017, Qualcomm announced that its Snapdragon 845 chipset and Spectra 280 Image Signal Processor would be the first phone SoC to record video in UHD Premium.
2018 In April 2018, RTL started broadcasting its own UHD channel in Germany. First available at Astra 19.2°E, the channel shows UHD productions including Formula 1, football and Deutschland sucht den Superstar. Satellite operator SES broadcast an 8K television signal via its satellite system for the first time in May 2018. The 8K demonstration content, with a resolution of 7680 × 4320 pixels, a frame rate of 60fps, and 10bpc color depth, was encoded in HEVC and transmitted at a rate of 80Mbit/s via the Astra 3B satellite during SES's Industry Days conference in Luxembourg. In June 2018, fuboTV broadcast the 2018 FIFA World Cup live in 4K and HDR10, becoming the first OTT streaming service to do so. Quarter-final, semi-final and final matches were available on many popular streaming devices including Apple TV, Chromecast Ultra, Fire TV, Roku and Android TVs. Content was streamed at 60 frames per second using HLS and DASH. Video was sent in fragmented MP4 containers delivering HEVC-encoded video. On December 1, 2018, NHK launched BS8K, a broadcast channel transmitting at 8K resolution. 2019 On February 25, 2019, at the 2019 Mobile World Congress, Sony announced the Xperia 1, the first smartphone featuring an ultrawide 21:9 aspect ratio 4K HDR OLED display (with a resolution of 3840 × 1644); it was released on May 30, 2019. In May 2019, for the first time in Europe, 8K demonstration content was received via satellite without the need for a separate external receiver or decoder. At the 2019 SES Industry Days conference in Betzdorf, Luxembourg, broadcast-quality 8K content (7680 × 4320 pixels at 50fps) was encoded using a Spin Digital HEVC encoder at a bit rate of 70Mbit/s, uplinked to a single 33MHz transponder on SES' Astra 28.2°E satellites, and the downlink was received and displayed on a Samsung Q950RB production model TV. List of 4K television channels Global Fashion 4K Festival 4K High 4K TV Europe 4K Heritage 4K UltraHD FunBox 4K Universe Astra Promo beIN Sports 4K (Spain) Canal+ Canal+ 4K Ultra HD (Poland) Digi 4K (Romania) Digiturk UHD Discovery Eurosport 4K Fashion One 4K Fashion TV 4K Festival 4K Insight UHD M3.hu UHD (online only) M6 4K Movistar Fórmula 1 UHD Movistar Partidazo UHD NASA TV NPO 1 Pearl TV ProsiebenSat.1 UHD QVC Deutschland QVC Zwei Rai 4K RMC Sport 1 RTL UHD RTVS UltraHD SES Ultra HD Demo Channel SFR Sport 4K Sky Sport 4K (Italy) Sky UHD1 (UK) Sky UHD2 (UK) Sky Sport UHD (Germany) Sky Sport Bundesliga UHD (Germany) Sky Sports Main Event UK Sky Sports F1 UHD UK Sportklub 4K Sport TV 4K UHD Travelxp Tricolor Ultra HD TF1 4K TNT Sports Ultimate TRT 4K TVE La 1 UHD (Spain) TVP 4K (Poland) UHD-1 V Sport Ultra HD Virgin TV Ultra HD Wow!
4K Africa BTV (Botswana) EBS 4K (Ethiopia) Nahoo sports+ UHD (Ethiopia) Nahoo sports+2 UHD (Ethiopia) ETV sports UHD (Ethiopia) Kana TV 4K (Ethiopia) on Time sports HD (Egypt) Americas NASA TV UHD Sportsnet 4K and Sportsnet One 4K (Canada) TSN 4K and TSN 2 4K (Canada) Hispasat TV 4K (Latin America) Fashion One 4K Fox Sports 4K and Fox Sports 1 4K (USA) DirecTV 4K and DirecTV Cinema 4K (USA) ESPN (USA) 4KUNIVERSE Insight UHD The Country Network SporTV 4K (Brazil) UHD-1 Asia CCTV 4K () CCTV 16 Olympic 4K () Guangdong Radio and Television 4K Variety Channel () Guangzhou Television Ultra HD () SiTV Joy UHD () Wasu-Discovery UHD () Beijing TV Winter Olympic & Documentaries UHD () First Media 4K (Indonesia) IndiHome 4K (Indonesia) Cable 4K (South Korea) KBS1 UHD (South Korea) KBS2 UHD (South Korea) MBC UHD (South Korea) SBS UHD (South Korea) KNN UHD (South Korea) KBC UHD (South Korea) TBC UHD (South Korea) TJB UHD (South Korea) UBC UHD (South Korea) G1 UHD (South Korea) Asia UHD (South Korea) Insight UHD Life U (South Korea) SBS F!L UHD (South Korea) IRIB UHD (Iran) Sky UHD UHD Dream TV UMAX (South Korea) UXN 4K-Sat Tata Play 4K (India) Now Sports 4K (Hong Kong) TVB Jade UHD () Bol Network (Pakistan) Hum News (Pakistan) Kan11 4K (Israel) NHK BS 4K (Japan) BS Nittele 4K (Japan) BS Asahi 4K (Japan) BS TV Tokyo 4K (Japan) BS-TBS 4K (Japan) BS Fuji 4K (Japan) SHOP CHANNEL 4K (Japan) 4K QVC (Japan) THE CINEMA 4K (Japan) J Sports (Japan) Star Channel 4K (Japan) Sukachan 4K (Japan) Japanese movie + Jidaigeki 4K (Japan) Wowow 4K (Japan) Astro Super Sport UHD (Malaysia) True 4K (Thailand) VTVcab 4K (Vietnam) SCTV 4K (Vietnam) Oceania Foxtel Movies Ultra HD (Australia) Fox Sports Ultra HD (Australia) List of 8K television channels NHK BS8K (Japan) CCTV-8K (China) Field trials of UHDTV over DTT networks Field trials using existing digital terrestrial television (DTT) transmitters have included the following. Status of standardization of UHDTV Standards that deal with UHDTV include: Standardization in ITU-R Standards approved in ITU-R: Rec. ITU-R BT.1201-1 (2004) Rec. ITU-R BT.1769 (2006) Rec. ITU-R BT.2020 (2012, revised 2014) Rec. ITU-R BT.2035-0 (07/13) A reference viewing environment for evaluation of HDTV program material or completed programmes Rec. ITU-R BS.2051-0 (02/14) Advanced sound system for programme production Rec. ITU-R BT.2100 (2016) Other documents prepared or being prepared by ITU-R: Report ITU-R BT.2246-3 (2014) The present state of ultra-high definition television Draft New Report ITU-R BT.[UHDTV-DTT TRIALS] (Sub-Working Group 6A-1) Collection of field trials of UHDTV over DTT networks Standardization in ITU-T and MPEG Standards developed in ITU-T's VCEG and ISO/IEC JTC 1's MPEG that support Ultra-HD include: H.265/MPEG-H HEVC High Efficiency Video Coding (2013, revised 2014) H.264/MPEG-4 AVC Advanced Video Coding (support for Ultra-HD added circa 2013) Standardization in SMPTE SMPTE 2036-1 (2009) SMPTE 2036-2 (2008) SMPTE 2036-3 (2010) Standardization for Europe DVB approved the Standard TS 101 154 V2.1.1, published (07/2014) in the DVB Blue Book A157 Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream, which was published by ETSI in the following months. Standardization for Japan and South Korea Standards for UHDTV in South Korea have been developed by its Telecommunications Technology Association. 
On August 30, 2013, the scenarios for 4K-UHDTV service were described in the Report "TTAR 07.0011: A Study on the UHDTV Service Scenarios and its Considerations". On May 22, 2014, the technical report "TTAR-07.0013: Terrestrial 4K UHDTV Broadcasting Service" was published. On October 13, 2014, an interim standard – "TTAI.KO-07.0123: Transmission and Reception for Terrestrial UHDTV Broadcasting Service" – was published based on HEVC encoding, with MPEG 2 TS, and DVB-T2 serving as the standards. On June 24, 2016, a standard – "TTAK.KO-07.0127: Transmission and Reception for Terrestrial UHDTV Broadcasting Service" – was published based on HEVC encoding, with MMTP/ROUTE IP, and ATSC 3.0 serving as the standards. See also Rec. 2020 – ITU-R Recommendation for UHDTV 4K resolution – Resolutions of common 4K formats and list of 4K-monitors, TVs, projectors 8K resolution – Specifications for ~8x4K UHD and 8Kx8K fulldome Ultra HD Blu-ray – 2160p / 4K (3840 × 2160 resolution) format Blu-ray Disc as specified by Blu-ray Disc Association IMAX – A film theater format that historically has been innovative in creating a more realistic viewing experience High Efficiency Video Coding (HEVC) VP9 / WebM 22.2 surround sound – The audio component of Super Hi-Vision Notes References External links 4K TV Mag – an online magazine about Ultra HD TV "What is 4K Ultra HD?" – 4K TV Mag Ultra-high-definition television Consumer electronics Digital television Telecommunications-related introductions in 2003 Television technology Television terminology Video formats
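A rough back-of-the-envelope calculation helps put the bit rates quoted in this article (at least 50Mbit/s for 4K Blu-ray Disc, roughly 70 to 80Mbit/s for the 8K satellite demonstrations) into perspective against the raw, uncompressed data rate of such signals, which is why HEVC compression recurs throughout the timeline. The sketch below is purely illustrative, assumes 4:2:0 chroma subsampling, and derives figures that do not appear in the article itself.

def raw_bitrate_mbps(width, height, fps, bits_per_sample, samples_per_pixel=1.5):
    # 4:2:0 subsampling carries on average 1.5 samples per pixel (1 luma + 0.5 chroma)
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

uhd_4k = raw_bitrate_mbps(3840, 2160, 60, 10)   # roughly 7,465 Mbit/s uncompressed
uhd_8k = raw_bitrate_mbps(7680, 4320, 60, 10)   # roughly 29,860 Mbit/s uncompressed
print(f"4K/60 10-bit raw: ~{uhd_4k:,.0f} Mbit/s (~{uhd_4k / 50:,.0f}:1 compression at 50 Mbit/s)")
print(f"8K/60 10-bit raw: ~{uhd_8k:,.0f} Mbit/s (~{uhd_8k / 80:,.0f}:1 compression at 80 Mbit/s)")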
Ultra-high-definition television
Technology
9,336
76,393,782
https://en.wikipedia.org/wiki/Terbium%28III%29%20iodate
Terbium(III) iodate is an inorganic compound with the chemical formula Tb(IO3)3. It can be obtained by the reaction of terbium(III) periodate and periodic acid in water at 160 °C, or by the hydrothermal reaction of terbium(III) nitrate or terbium(III) chloride and iodic acid at 200 °C. It crystallizes in the monoclinic crystal system, with space group P21/c and unit cell parameters a=7.102, b=8.468, c=13.355 Å, β=99.67°. References Terbium compounds Iodates
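As a small worked example, the unit-cell volume implied by the parameters above can be computed with the standard monoclinic formula V = a·b·c·sin β; the snippet and the resulting figure are illustrative additions, not values stated in the article.

import math

a, b, c = 7.102, 8.468, 13.355          # cell lengths in angstroms, as reported above
beta = math.radians(99.67)              # monoclinic angle
volume = a * b * c * math.sin(beta)     # monoclinic cell volume: V = a*b*c*sin(beta)
print(f"unit-cell volume = {volume:.1f} cubic angstroms")   # roughly 792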
Terbium(III) iodate
Chemistry
138
36,065,631
https://en.wikipedia.org/wiki/Mohamed%20Salmane
Mohamed Salmane (born 1958) is a Tunisian politician. He serves as the Minister of Housing and Equipment under Prime Minister Hamadi Jebali. Biography Mohamed Salmane was born on 9 October 1958 in Nabeul. He received a Bachelor of Science degree in Civil engineering. He has been the General Director of the Tunisian-Libyan Investment Office and the Chief Executive Officer of Tunisie Autoroutes. On 20 December 2011, after former President Zine El Abidine Ben Ali was deposed, he joined the Jebali Cabinet as Minister of Housing and Equipment. He is married and has three children. References Living people 1958 births Civil engineers Housing ministers of Tunisia Environment ministers of Tunisia Tunisian engineers
Mohamed Salmane
Engineering
141
1,160
https://en.wikipedia.org/wiki/Automorphism
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. Definition In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.) More generally, for an object X in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism f : X → X is an automorphism if there is a morphism g : X → X such that g ∘ f = f ∘ g = idX, where idX is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism. Automorphism group The automorphisms of an object form a group under composition of morphisms, which is called the automorphism group of the object. This results straightforwardly from the definition of a category. The automorphism group of an object X in a category C is often denoted AutC(X), or simply Aut(X) if the category is clear from context. Examples In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X. In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group. In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements precisely constitute GL(V).) A field automorphism is a bijective ring homomorphism from a field to itself. The field of the rational numbers has no other automorphism than the identity, since an automorphism must fix the additive identity 0 and the multiplicative identity 1; the sum of a finite number of 1s must be fixed, as well as the additive inverses of these sums (that is, the automorphism fixes all integers); finally, since every rational number is the quotient of two integers, all rational numbers must be fixed by any automorphism. The field of the real numbers has no automorphisms other than the identity.
Indeed, the rational numbers must be fixed by every automorphism, per above; an automorphism must preserve inequalities since x < y is equivalent to y − x being a nonzero square, and the latter property is preserved by every automorphism; finally, every real number must be fixed since it is the least upper bound of a sequence of rational numbers. The field of the complex numbers has a unique nontrivial automorphism that fixes the real numbers. It is complex conjugation, which maps i to −i. The axiom of choice implies the existence of uncountably many automorphisms that do not fix the real numbers. The study of automorphisms of algebraic field extensions is the starting point and the main object of Galois theory. The automorphism group of the quaternions (H) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: maps of the form a ↦ bab−1 for a fixed nonzero quaternion b. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space. The automorphism group of the octonions (O) is the exceptional Lie group G2. In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation. In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used: In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group. In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations. An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M). In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism. History One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing that it gave "a new fifth root of unity, connected with the former fifth root by relations of perfect reciprocity". Inner and outer automorphisms In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms. In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga−1 (or a−1ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G). The other automorphisms are called outer automorphisms. The quotient group Aut(G)/Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms. The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different. See also Antiautomorphism Automorphism (in Sudoku puzzles) Characteristic subgroup Endomorphism ring Frobenius automorphism Morphism Order automorphism (in order theory).
Relation-preserving automorphism Fractional Fourier transform References External links Automorphism at Encyclopaedia of Mathematics Morphisms Abstract algebra Symmetry
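As a concrete, machine-checkable illustration of the inner automorphisms discussed above, the following Python sketch verifies that conjugation by a fixed element of the symmetric group S3 is an automorphism, that is, a bijective homomorphism of the group to itself. The representation, the chosen element and all function names are illustrative assumptions, not part of the article.

from itertools import permutations

S3 = list(permutations(range(3)))        # the six elements of the symmetric group S3

def compose(p, q):
    # composition of permutations: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugation_by(a):
    # the inner automorphism g -> a g a^(-1)
    a_inv = inverse(a)
    return lambda g: compose(compose(a, g), a_inv)

phi = conjugation_by((1, 2, 0))          # conjugation by a 3-cycle

# homomorphism property: phi(g*h) == phi(g)*phi(h) for all g, h in S3
assert all(phi(compose(g, h)) == compose(phi(g), phi(h)) for g in S3 for h in S3)
# bijectivity: phi permutes the six group elements
assert sorted(phi(g) for g in S3) == sorted(S3)
print("conjugation is an automorphism of S3")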
Automorphism
Physics,Mathematics
1,582
10,595,872
https://en.wikipedia.org/wiki/Illegal%20number
An illegal number is a number that represents information which is illegal to possess, utter, propagate, or otherwise transmit in some legal jurisdiction. Any piece of digital information is representable as a number; consequently, if communicating a specific set of information is illegal in some way, then the number may be illegal as well. Background A number may represent some type of classified information or trade secret, legal to possess only by certain authorized persons. An AACS encryption key (09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0) that came to prominence in May 2007 is an example of a number claimed to be a secret, and whose publication or inappropriate possession is claimed to be illegal in the United States. It allegedly assists in the decryption of any HD DVD or Blu-ray Disc released before this date. The issuers of a series of cease-and-desist letters claim that the key itself is therefore a copyright circumvention device, and that publishing the key violates Title 1 of the US Digital Millennium Copyright Act. In part of the DeCSS court order and in the AACS legal notices, the claimed protection for these numbers is based on their mere possession and the value or potential use of the numbers. This makes their status and legal issues surrounding their distribution quite distinct from that of copyright infringement. Any image file or an executable program can be regarded as simply a very large binary number. In certain jurisdictions, there are images that are illegal to possess, due to obscenity or secrecy/classified status, so the corresponding numbers could be illegal. In 2011, Sony sued George Hotz and members of fail0verflow for jailbreaking the PlayStation 3. Part of the lawsuit complaint was that they had published PS3 keys. Sony also threatened to sue anyone who distributed the keys. Sony later accidentally retweeted an older dongle key through its fictional Kevin Butler character. Flags and steganography As a protest of the DeCSS case, many people created "steganographic" versions of the illegal information (i.e. hiding them in some form in flags etc.). Dave Touretzky of Carnegie Mellon University created a "Gallery of DeCSS descramblers". In the AACS encryption key controversy, a "free speech flag" was created. Some illegal numbers are so short that a simple flag (shown in the image) could be created by using triples of components as describing red-green-blue colors. The argument is that if short numbers can be made illegal, then any representation of those numbers also becomes illegal, like simple patterns of colors, etc. In the Sony Computer Entertainment v. Hotz case, many bloggers (including one at Yale Law School) made a "new free speech flag" in homage to the AACS free speech flag. Most of these were based on the "dongle key" rather than the keys Hotz actually released. Several users of other websites posted similar flags. Illegal primes An illegal prime is an illegal number which is also prime. One of the earliest illegal prime numbers was generated in March 2001 by Phil Carmody. Its binary representation corresponds to a compressed version of the C source code of a computer program implementing the DeCSS decryption algorithm, which can be used by a computer to circumvent a DVD's copy protection. Protests against the indictment of DeCSS author Jon Lech Johansen and legislation prohibiting publication of DeCSS code took many forms. One of them was the representation of the illegal code in a form that had an intrinsically archivable quality. 
Since the bits making up a computer program also represent a number, the plan was for the number to have some special property that would make it archivable and publishable (one method was to print it on a T-shirt). The primality of a number is a fundamental property of number theory and is therefore not dependent on legal definitions of any particular jurisdiction. The large prime database of the PrimePages website records the top 20 primes of various special forms; one of these forms is primes whose primality has been proved using the elliptic curve primality proving (ECPP) algorithm. Thus, if the number were large enough and proved prime using ECPP, it would be published. Other examples There are other contexts in which smaller numbers have run afoul of laws or regulations, or drawn the attention of authorities. In 2012, it was reported that the numbers 89, 6, and 4 each became banned search terms on search engines in China, because of the date (1989-06-04) of the June Fourth Massacre in Tiananmen Square. In 2012, the school district in Greeley, Colorado, banned the wearing of jerseys that bore the numbers 18, 14, or 13 (or the reverse: 81, 41, or 31) due to affiliations with the 18th Street, Norteños, and Sureños gangs, respectively. In 2017, far-right Slovak politician Marian Kotleba was criminally charged for donating €1,488 to a charity. The number is a reference to a white supremacist slogan and the Nazi salute. See also HDCP master key release Texas Instruments signing key controversy Normal number Infinite monkey theorem The Library of Babel Prior art Streisand effect Numerology References External links Cryptography law Numbers Trade secrets File sharing Numerals
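To illustrate the premise that any piece of digital information is representable as a number, here is a short, purely hypothetical sketch (the sample bytes and variable names are not from the article): it converts a byte string to an integer and back without loss, the same identification that underlies illegal primes and the rendering of key bytes as RGB-triple "flags".

data = b"hello, world"                          # stand-in for any file's bytes
n = int.from_bytes(data, "big")                 # the data viewed as one (possibly huge) integer
restored = n.to_bytes((n.bit_length() + 7) // 8, "big")
assert restored == data                         # lossless here; leading zero bytes would need the length stored separately
print(len(data), "bytes ->", n)

# the "free speech flag" idea: read the bytes three at a time as RGB colors
rgb = [tuple(data[i:i + 3]) for i in range(0, len(data) - 2, 3)]
print("as colors:", rgb)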
Illegal number
Mathematics
1,097
2,469,940
https://en.wikipedia.org/wiki/Keith%20Medal
The Keith Medal was a prize awarded by the Royal Society of Edinburgh, Scotland's national academy, for a scientific paper published in the society's scientific journals, preference being given to a paper containing a discovery, either in mathematics or earth sciences. The Medal was inaugurated in 1827 as a result of a gift from Alexander Keith of Dunnottar, the first Treasurer of the Society. It was awarded quadrennially, alternately for a paper published in: Proceedings A (Mathematics) or Transactions (Earth and Environmental Sciences). The medal bears the head of John Napier of Merchiston. The medal is no longer awarded. Recipients of the Keith Gold Medal Source (1827 to 1913): Proceedings of the Royal Society of Edinburgh 19th century 1827–29: David Brewster, on his Discovery of Two New Immiscible Fluids in the Cavities of certain Minerals 1829–31: David Brewster, on a New Analysis of Solar Light 1831–33: Thomas Graham, on the Law of the Diffusion of Gases 1833–35: James David Forbes, on the Refraction and Polarization of Heat 1835–37: John Scott Russell, on Hydrodynamics 1837–39: John Shaw, on the Development and Growth of the Salmon 1839–41: Not awarded 1841–43: James David Forbes, on Glaciers 1843–45: Not awarded 1845–47: Sir Thomas Brisbane, for the Makerstoun Observations on Magnetic Phenomena 1847–49: Not awarded 1849–51: Philip Kelland, on General Differentiation, including his more recent Communication on a process of the Differential Calculus, and its application to the solution of certain Differential Equations 1851–53: William John Macquorn Rankine, on the Mechanical Action of Heat 1853–55: Thomas Anderson, on the Crystalline Constituents of Opium, and on the Products of the Destructive Distillation of Animal Substances 1855–57: George Boole, on the Application of the Theory of Probabilities to Questions of the Combination of Testimonies and Judgments 1857–59: Not awarded 1859–61: John Allan Broun, on the Horizontal Force of the Earth’s Magnetism, on the Correction of the Bifilar Magnetometer, and on Terrestrial Magnetism generally 1861–63: William Thomson, on some Kinematical and Dynamical Theorems 1863–65: James David Forbes, for Experimental Inquiry into the Laws of Conduction of Heat in Iron Bars 1865–67: Charles Piazzi Smyth, on Recent Measures at the Great Pyramid 1867–69: Peter Guthrie Tait, on the Rotation of a Rigid Body about a Fixed Point 1869–71: James Clerk Maxwell, on Figures, Frames, and Diagrams of Forces 1871–73: Peter Guthrie Tait, First Approximation to a Thermo-electric Diagram 1873–75: Alexander Crum Brown, on the Sense of Rotation, and on the Anatomical Relations of the Semicircular Canals of the Internal Ear 1875–77: Matthew Forster Heddle, on the Rhombohedral Carbonates and on the Felspars of Scotland 1877–79: Henry Charles Fleeming Jenkin, on the Application of Graphic Methods to the Determination of the Efficiency of Machinery 1879–81: George Chrystal, on the Differential Telephone 1881–83: Sir Thomas Muir, Researches into the Theory of Determinants and Continued Fractions 1883–85: John Aitken, on the Formation of Small Clear Spaces in Dusty Air 1885–87: John Young Buchanan, for a series of communications, extending over several years, on subjects connected with Ocean Circulation, Compressibility of Glass, etc. 
1887–89: Edmund Albert Letts, for his papers on the Organic Compounds of Phosphorus 1889–91: Robert Traill Omond, for his contributions to Meteorological Science 1891–93: Sir Thomas Richard Fraser, for his papers on Strophanthus hispidus, Strophanthin, and Strophanthidin 1893–95: Cargill Gilston Knott, for his papers on the Strains produced by Magnetism in Iron and in Nickel 1895–97: Sir Thomas Muir, for his continued communications on Determinants and Allied Questions 1897–99: James Burgess, on the Definite Integral ... 20th/21st century See also List of mathematics awards References External links Awards of Keith Prize 1827-1890 List of recent winners Announcement of Jenkin's award British science and technology awards Mathematics awards Royal Society of Edinburgh Scottish awards 1827 establishments in Scotland Awards established in 1827
Keith Medal
Technology
905
32,352,914
https://en.wikipedia.org/wiki/Lactarius%20crocatus
Lactarius crocatus is a member of the large milk-cap genus Lactarius in the order Russulales. Found in Chiang Mai Province (northern Thailand), it was described as new to science in 2010. See also List of Lactarius species References External links crocatus Fungi described in 2010 Fungi of Asia Fungus species
Lactarius crocatus
Biology
71
66,480
https://en.wikipedia.org/wiki/AstroTurf
AstroTurf is an American subsidiary of SportGroup that produces artificial turf for playing surfaces in sports. The original AstroTurf product was a short-pile synthetic turf invented in 1965 by Monsanto. Since the early 2000s, AstroTurf has marketed taller pile systems that use infill materials to better replicate natural turf. In 2016, AstroTurf became a subsidiary of German-based SportGroup, a family of sports surfacing companies, which itself is owned by the investment firm Equistone Partners Europe. History The original AstroTurf brand product was invented by James M. Faria and Robert T. Wright at Monsanto. The original, experimental installation was inside the Waughhtel-Howe Field House at the Moses Brown School in Providence, Rhode Island, in 1964. It was patented in 1965 and originally sold under the name "ChemGrass." It was rebranded as AstroTurf by company employee John A. Wortmann after its first well-publicized use at the Houston Astrodome stadium in 1966. Donald L. Elbert patented two methods to improve the product in 1971. Early iterations of the short-pile turf swept many major stadiums, but the product did need improvement. Concerns over directionality and traction led Monsanto's R&D department to implement a texturized nylon system. By imparting a crimped texture to the nylon after it was extruded, the product became highly uniform. In 1987, Monsanto consolidated its AstroTurf management, marketing, and technical activities in Dalton, Georgia, as AstroTurf Industries, Inc. In 1988, Balsam AG purchased all the capital stock of AstroTurf Industries, Inc. In 1994, Southwest Recreational Industries, Inc. (SRI) acquired the AstroTurf brand. In 1996, SRI was acquired by American Sports Products Group Inc. While AstroTurf was the industry leader throughout the late 20th century, other companies emerged in the early 2000s. FieldTurf, AstroTurf's chief competitor since the early 2000s, marketed a product of tall-pile polyethylene turf with infill, meant to mimic natural grass more than the older products. This third-generation turf, as it became known, changed the landscape of the marketplace. Although SRI successfully marketed AstroPlay, a third-generation turf product, increased competition gave way to lawsuits. In 2000, SRI was awarded $1.5 million in a lawsuit after FieldTurf was deemed to have lied to the public by making false statements regarding its own product and making false claims about AstroTurf and AstroPlay products. Despite their legal victory, increased competition took its toll. In 2004, SRI declared bankruptcy. Out of the bankruptcy proceedings, Textile Management Associates, Inc. (TMA) of Dalton, Georgia, acquired the AstroTurf brand and other assets. TMA began marketing the AstroTurf brand under the company AstroTurf, LLC. In 2006, General Sports Venue (GSV) became TMA's marketing partner for the AstroTurf brand for the American market. AstroTurf, LLC handled the marketing of AstroTurf in the rest of the world. In 2009, TMA acquired GSV to enter the marketplace as a direct seller. AstroTurf, LLC focused its efforts on research and development, which has promoted rapid growth. AstroTurf introduced new product features and installation methods, including AstroFlect (a heat-reduction technology) and field prefabrication (indoor, climate-controlled inlaying). AstroTurf also introduced a product called "RootZone" consisting of crimped fibers designed to encapsulate infill. 
In 2016, SportGroup Holding announced that it would purchase AstroTurf, along with its associated manufacturing facilities. The AstroTurf brand has operated since then in North America as AstroTurf Corporation. In August 2021, AstroTurf became the official supplier of artificial turf to the United Soccer League, who run soccer leagues at the second, third, and fourth tiers of the U.S. men's soccer pyramid and the second tier of the U.S. women's soccer pyramid. 1960s 1964 The Moses Brown School in Providence, Rhode Island, installs ChemGrass. 1966 First major installation of AstroTurf (ChemGrass) at the Houston Astrodome indoor stadium for the Houston Astros. The infield portion was in place before opening day in April; the outfield was installed in early summer. 1967 AstroTurf is first installed in an outdoor stadium—Memorial Stadium at Indiana State University in Terre Haute. 1968 AstroTurf manufacturing facility opens in Dalton, Georgia. 1969 The backyard of The Brady Bunch house between the service porch and garage and under Tiger's kennel is covered with AstroTurf. According to script development notes, the installation firm hired by Mike Brady to lay the turf was owned by his college roommate, who had just started a landscaping business after returning from a combat tour in the Vietnam War with the 18th Engineer Brigade. In keeping with studio instructions, no direct mention of the war in Vietnam appeared in the script. The scene in which the installation takes place was ultimately cut, and so never appeared in the series. 1970s 1970 The 1970 World Series is the first with games on AstroTurf (previously installed at Cincinnati's Riverfront Stadium), as the Reds play the Baltimore Orioles. 1971 The CFL's Hamilton Tiger-Cats install AstroTurf at their home stadium, Ivor Wynne Stadium, in preparation for hosting the Grey Cup game the following year. 1972 The Kansas City Chiefs home field of Arrowhead Stadium and the Kansas City Royals home field of Royals Stadium (now Kauffman Stadium) open in Kansas City, Missouri, with AstroTurf playing surfaces. 1973 The Buffalo Bills' home field of Rich Stadium (later Ralph Wilson Stadium, and then Highmark Stadium) opens in Orchard Park, New York, with an AstroTurf playing surface. 1974 The Miami Dolphins face the Minnesota Vikings on AstroTurf (the first Super Bowl played on the surface, but not the first to be played on artificial turf; that was Super Bowl V (in 1971) with Poly-Turf) in Super Bowl VIII – Rice Stadium, Houston, Texas. 1975 The first international field hockey game is played on AstroTurf at Molson Stadium, Montreal. 1980s 1980 The Philadelphia Phillies and Kansas City Royals play the entire 1980 World Series on AstroTurf in their ballparks. 1984 AstroTurf installs the first North American vertical drainage systems in Ewing, New Jersey, at Trenton State College (now known as The College of New Jersey). 1985 The St. Louis Cardinals and Kansas City Royals play the entire 1985 World Series on AstroTurf in their ballparks. 1987 The St. Louis Cardinals and Minnesota Twins play the entire 1987 World Series on AstroTurf in their ballparks. 1989 The first E-Layer system (Elastomeric) is installed at the College of William & Mary, as well as the University of California, Berkeley. 1990s 1993 The 1993 World Series, between the Philadelphia Phillies and Toronto Blue Jays, was the fourth World Series to be played entirely on artificial turf, following those in 1980, 1985, and 1987. 1999 Real Madrid C.F.
(Spain) becomes the first European football club to purchase an AstroTurf system for their practice fields. References External links Products introduced in 1964 Artificial turf Houston Astros
AstroTurf
Chemistry
1,508
63,517,158
https://en.wikipedia.org/wiki/Roborock
Roborock (also known as Beijing Roborock Technology Co. Ltd.; ) is a Chinese consumer goods company known for its robotic sweeping and mopping devices and handheld cordless stick vacuums. Xiaomi played a key role in the company's founding. History Beijing Roborock Technology Co. Ltd. was founded in 2014 in Beijing, China. Its launch was largely supported by Xiaomi. The company raised about $640 million in its February 2020 IPO, and the company had annual revenue of approximately CNY 4.5 billion as of August 2021. Roborock currently trades on Beijing's STAR market. Products Newer models in Roborock's "S" line of robotic floor cleaning devices have an obstacle avoidance system which uses dual cameras and a microprocessor to discern objects as small as 5 cm wide by 3 cm high. As the cleaners move about a space they create a schematic map, marking objects to be avoided later. Roborock has previously claimed that their floor cleaning devices do not store images or upload them to the cloud, and that all captured images are immediately deleted after processing. Roborock introduced ReactiveAI 2.0 with the release of the Roborock S7 MaxV. It has an RGB camera and 3D structured light scanning with a new neural processor for improved object recognition regardless of lighting conditions. In addition to their front-mounted cameras, newer Roborock floor cleaning devices use top-mounted LIDAR to map rooms. Using an app, users can set off-limits areas to ensure the device does not clean there. Users can also set "no-mop" areas where the device may vacuum but not mop. Roborock Q7 Max, released in 2022, generates 4,200 Pa suction, and can be controlled by Alexa, Siri, or Google Assistant. In 2023, Roborock released the S8, S8 Plus and S8 Pro Ultra. The main difference between the models is the docking station each includes. The S8 has a standard charging base whereas the S8 Plus includes an Auto-Empty Dock. The S8 Pro Ultra ships with the RockDock Ultra, the most advanced dock Roborock offers. In addition to emptying the S8's dustbin and charging the robot, the dock also manages the S8's mopping system including refilling its water and drying its mop pad. The S8 Pro Ultra is the first Roborock robot vacuum with lifting dual brushrolls. The S8 and S8 Plus have dual brushrolls but they do not lift. All models which precede the S8 have a single brushroll. Roborock S7 MaxV Ultra has 5,100Pa suction and a livestreaming camera. Roborock S7, which debuted at CES 2021, uses trademarked VibaRise. Roborock S7 can detect the type of floor to use either its mop or vacuum. The Roborock S6 MaxV operates at 67 dB and generates maximum suction of 2,500 Pa. Its dustbin measures 460 mL at full capacity. It can vacuum approximately 250 square meters between charges, and its mop can cover about 200 square meters of hard flooring on the same charge. The Roborock S4 does not mop. In 2022, Roborock released the Q5 which replaces the S models, and is similar to the S4 Max and the S5. The Q5 has a higher suction power but lacks the mop feature. References External links Xiaomi Companies based in Beijing Robotic vacuum cleaners Home automation Chinese companies established in 2014 Vacuum cleaner manufacturers Chinese brands
Roborock
Technology
757
3,285,684
https://en.wikipedia.org/wiki/Ancient%20DNA
Ancient DNA (aDNA) is DNA isolated from ancient sources (typically specimens, but also environmental DNA). Due to degradation processes (including cross-linking, deamination and fragmentation), ancient DNA is more degraded than contemporary genetic material. Genetic material has been recovered from paleo/archaeological and historical skeletal material, mummified tissues, archival collections of non-frozen medical specimens, preserved plant remains, ice and permafrost cores, marine and lake sediments and excavation dirt. Even under the best preservation conditions, there is an upper boundary of 0.4–1.5 million years for a sample to contain sufficient DNA for sequencing technologies. The oldest DNA sequenced from physical specimens is from mammoth molars in Siberia over 1 million years old. In 2022, two-million-year-old genetic material was recovered from sediments in Greenland, and is currently considered the oldest DNA discovered so far. History of ancient DNA studies 1980s The first study of what would come to be called aDNA was conducted in 1984, when Russ Higuchi and colleagues at the University of California, Berkeley, reported that traces of DNA from a museum specimen of the quagga not only remained in the specimen over 150 years after the death of the individual, but could be extracted and sequenced. Over the next two years, through investigations into natural and artificially mummified specimens, Svante Pääbo confirmed that this phenomenon was not limited to relatively recent museum specimens but could apparently be replicated in a range of mummified human samples that dated as far back as several thousand years. The laborious processes that were required at that time to sequence such DNA (through bacterial cloning) were an effective brake on the study of ancient DNA (aDNA) and the field of museomics. However, with the development of the Polymerase Chain Reaction (PCR) in the late 1980s, the field began to progress rapidly. Double primer PCR amplification of aDNA (jumping-PCR) can produce highly skewed and non-authentic sequence artifacts. A multiple-primer, nested PCR strategy was used to overcome those shortcomings. 1990s The post-PCR era heralded a wave of publications as numerous research groups claimed success in isolating aDNA. Soon a series of incredible findings had been published, claiming authentic DNA could be extracted from specimens that were millions of years old, into the realms of what Lindahl (1993b) has labelled Antediluvian DNA. The majority of such claims were based on the retrieval of DNA from organisms preserved in amber. Insects such as stingless bees, termites, and wood gnats, as well as plant and bacterial sequences, were said to have been extracted from Dominican amber dating to the Oligocene epoch. Still older sources of Lebanese amber-encased weevils, dating to within the Cretaceous epoch, reportedly also yielded authentic DNA. Claims of DNA retrieval were not limited to amber. Reports of several sediment-preserved plant remains dating to the Miocene were published. Then in 1994, Woodward et al. reported what were at the time called the most exciting results to date — mitochondrial cytochrome b sequences that had apparently been extracted from dinosaur bones dating to more than 80 million years ago. When in 1995 two further studies reported dinosaur DNA sequences extracted from a Cretaceous egg, it seemed that the field would revolutionize knowledge of the Earth's evolutionary past.
Even these extraordinary ages were topped by the claimed retrieval of 250-million-year-old halobacterial sequences from halite. The development of a better understanding of the kinetics of DNA preservation, the risks of sample contamination and other complicating factors led the field to view these results more skeptically. Numerous careful attempts failed to replicate many of the findings, and all of the decade's claims of multi-million year old aDNA would come to be dismissed as inauthentic. 2000s Single primer extension amplification was introduced in 2007 to address postmortem DNA modification damage. Since 2009 the field of aDNA studies has been revolutionized with the introduction of much cheaper research techniques. The use of high-throughput Next Generation Sequencing (NGS) techniques in the field of ancient DNA research has been essential for reconstructing the genomes of ancient or extinct organisms. A single-stranded DNA (ssDNA) library preparation method has sparked great interest among ancient DNA (aDNA) researchers. In addition to these technical innovations, the start of the decade saw the field begin to develop better standards and criteria for evaluating DNA results, as well as a better understanding of the potential pitfalls. 2020s Autumn of 2022, the Nobel Prize of Physiology or Medicine was awarded to Svante Pääbo "for his discoveries concerning the genomes of extinct hominins and human evolution". A few days later, on the 7th of December 2022, a study in Nature reported that two-million year old genetic material was found in Greenland, and is currently considered the oldest DNA discovered so far. Problems and errors Degradation processes Due to degradation processes (including cross-linking, deamination and fragmentation), ancient DNA is of lower quality than modern genetic material. The damage characteristics and ability of aDNA to survive through time restricts possible analyses and places an upper limit on the age of successful samples. There is a theoretical correlation between time and DNA degradation, although differences in environmental conditions complicate matters. Samples subjected to different conditions are unlikely to predictably align to a uniform age-degradation relationship. The environmental effects may even matter after excavation, as DNA decay-rates may increase, particularly under fluctuating storage conditions. Even under the best preservation conditions, there is an upper boundary of 0.4 to 1.5 million years for a sample to contain sufficient DNA for contemporary sequencing technologies. Research into the decay of mitochondrial and nuclear DNA in moa bones has modelled mitochondrial DNA degradation to an average length of 1 base pair after 6,830,000 years at −5 °C. The decay kinetics have been measured by accelerated aging experiments, further displaying the strong influence of storage temperature and humidity on DNA decay. Nuclear DNA degrades at least twice as fast as mtDNA. Early studies that reported recovery of much older DNA, for example from Cretaceous dinosaur remains, may have stemmed from contamination of the sample. Age limit A critical review of ancient DNA literature through the development of the field highlights that few studies have succeeded in amplifying DNA from remains older than several hundred thousand years. A greater appreciation for the risks of environmental contamination and studies on the chemical stability of DNA have raised concerns over previously reported results. 
The alleged dinosaur DNA was later revealed to be human Y-chromosome. The DNA reported from encapsulated halobacteria has been criticized based on its similarity to modern bacteria, which hints at contamination; alternatively, the sequences may be the product of long-term, low-level metabolic activity. aDNA may contain a large number of postmortem mutations, increasing with time. Some regions of the polynucleotide are more susceptible to this degradation, allowing erroneous sequence data to bypass statistical filters used to check the validity of data. Due to sequencing errors, great caution should be applied to the interpretation of population size estimates. Substitutions resulting from deamination of cytosine residues are vastly over-represented in ancient DNA sequences. Miscoding of C to T and G to A accounts for the majority of errors. Contamination Another problem with ancient DNA samples is contamination by modern human DNA and by microbial DNA (most of which is also ancient). New methods have emerged in recent years to prevent possible contamination of aDNA samples, including conducting extractions under extremely sterile conditions, using special adapters to identify endogenous molecules of the sample (distinguished from those introduced during analysis), and applying bioinformatics to resulting sequences based on known reads in order to approximate rates of contamination. Authentication of aDNA Development in the aDNA field in the 2000s increased the importance of authenticating recovered DNA to confirm that it is indeed ancient and not the result of recent contamination. As DNA degrades over time, the nucleotides that make up the DNA may change, especially at the ends of the DNA molecules. The deamination of cytosine to uracil at the ends of DNA molecules has become a way of authenticating aDNA. During DNA sequencing, the DNA polymerases will incorporate an adenine (A) across from the uracil (U), leading to cytosine (C) to thymine (T) substitutions in the aDNA data. These substitutions increase in frequency as the sample gets older. The frequency of such C-to-T substitutions, a measure of ancient DNA damage, can be estimated using software such as mapDamage2.0 or PMDtools, and interactively with metaDMG. Due to hydrolytic depurination, DNA fragments into smaller pieces, leading to single-stranded breaks. Combined with the damage pattern, this short fragment length can also help differentiate between modern and ancient DNA. Non-human aDNA Despite the problems associated with 'antediluvian' DNA, a wide and ever-increasing range of aDNA sequences have now been published from a range of animal and plant taxa. Tissues examined include artificially or naturally mummified animal remains, bone, shells, paleofaeces, alcohol-preserved specimens, rodent middens, dried plant remains, and recently, extractions of animal and plant DNA directly from soil samples. In June 2013, a group of researchers including Eske Willerslev, Marcus Thomas Pius Gilbert and Ludovic Orlando of the Centre for Geogenetics, Natural History Museum of Denmark at the University of Copenhagen, announced that they had sequenced the DNA of a 560–780 thousand year old horse, using material extracted from a leg bone found buried in permafrost in Canada's Yukon territory. A German team also reported in 2013 the reconstructed mitochondrial genome of a bear, Ursus deningeri, more than 300,000 years old, proving that authentic ancient DNA can be preserved for hundreds of thousands of years outside of permafrost.
The DNA sequence of even older nuclear DNA was reported in 2021 from the permafrost-preserved teeth of two Siberian mammoths, both over a million years old. Researchers in 2016 measured chloroplast DNA in marine sediment cores, and found diatom DNA dating back to 1.4 million years. This DNA had a half-life significantly longer than in previous research, of up to 15,000 years. Kirkpatrick's team also found that DNA decayed at a half-life rate only until about 100 thousand years, at which point it followed a slower, power-law decay rate. Human aDNA Due to the considerable anthropological, archaeological, and public interest directed toward human remains, they have received great attention from the DNA community. There are also more profound contamination issues, since the specimens belong to the same species as the researchers collecting and evaluating the samples. Sources Due to the morphological preservation in mummies, many studies from the 1990s and 2000s used mummified tissue as a source of ancient human DNA. Examples include both naturally preserved specimens, such as Ötzi the Iceman, frozen in a glacier, and bodies preserved through rapid desiccation at high altitude in the Andes, as well as various chemically treated preserved tissues such as the mummies of ancient Egypt. However, mummified remains are a limited resource. The majority of human aDNA studies have focused on extracting DNA from two sources much more common in the archaeological record: bones and teeth. The bone that is most often used for DNA extraction is the petrous ear bone, since its dense structure provides good conditions for DNA preservation. Several other sources have also yielded DNA, including paleofaeces and hair. Contamination remains a major problem when working on ancient human material. Ancient pathogen DNA has been successfully retrieved from samples dating to more than 5,000 years old in humans and as long as 17,000 years ago in other species. In addition to the usual sources of mummified tissue, bones and teeth, such studies have also examined a range of other tissue samples, including calcified pleura, tissue embedded in paraffin, and formalin-fixed tissue. Efficient computational tools have been developed for pathogen and microorganism aDNA analyses at small (QIIME) and large (FALCON) scales. Results Taking preventative measures against such contamination in their procedure, a 2012 study analyzed bone samples of a Neanderthal group in the El Sidrón cave, finding new insights into potential kinship and genetic diversity from the aDNA. In November 2015, scientists reported finding a 110,000-year-old tooth containing DNA from the Denisovan hominin, an extinct species of human in the genus Homo. The research has added new complexity to the peopling of Eurasia. A study from 2018 showed that a Bronze Age mass migration had greatly impacted the genetic makeup of the British Isles, bringing with it the Bell Beaker culture from mainland Europe. It has also revealed new information about links between the ancestors of Central Asians and the indigenous peoples of the Americas. In Africa, older DNA degrades quickly due to the warmer tropical climate, although in September 2017 ancient DNA samples as old as 8,100 years were reported. Moreover, ancient DNA has helped researchers to estimate modern human divergence.
By sequencing African genomes from three Stone Age hunter-gatherers (2,000 years old) and four Iron Age farmers (300 to 500 years old), Schlebusch and colleagues were able to push back the date of the earliest divergence between human populations to between 350,000 and 260,000 years ago. As of 2021, the oldest completely reconstructed human genomes are ~45,000 years old. Such genetic data provides insights into migration and genetic history – for example, of Europe – including interbreeding between archaic and modern humans, such as a common admixture between initial European modern humans and Neanderthals. Researchers specializing in ancient DNA Alan Cooper Kirsten Bos Joachim Burger M. Thomas P. Gilbert Johannes Krause Svante Pääbo Hendrik Poinar David Reich Beth Shapiro Mark G. Thomas Eske Willerslev Carles Lalueza-Fox See also Ancient pathogen genomics Ancient protein Archaeogenetics Environmental DNA (eDNA) List of DNA tested mummies List of haplogroups of historic people Molecular paleontology Paleogenetics Sedimentary ancient DNA (sedaDNA) References Further reading External links Famous mtDNA, isogg wiki Ancient mtDNA, isogg wiki Ancient DNA, y-str.org Evidence of the Past: A Map and Status of Ancient Remains – samples from the USA; no sequence data here – no data on Y-DNA, only mtDNA DNA Genetics techniques Methods in archaeology Genetic genealogy Ancient DNA (human)
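The first-order decay kinetics referred to in this article (phosphodiester bonds breaking over time, with a strong dependence on temperature and storage conditions, leaving ever-shorter fragments) can be illustrated with a small, purely hypothetical calculation. The rate constant below is an arbitrary placeholder, not a value from the studies cited above, and the simple exponential model ignores the slower power-law behaviour reported for old sediment DNA.

import math

def mean_fragment_length(k_per_site_per_year, years):
    # expected mean fragment length (bp) if each phosphodiester bond survives
    # time t with probability exp(-k*t), i.e. simple first-order decay
    p_broken = 1.0 - math.exp(-k_per_site_per_year * years)
    return float("inf") if p_broken == 0.0 else 1.0 / p_broken

k = 1e-7    # illustrative per-site damage rate per year (placeholder, not a published value)
for t in (1e3, 1e4, 1e5, 1e6):
    print(f"after {t:>9,.0f} years: ~{mean_fragment_length(k, t):10.1f} bp")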
Ancient DNA
Engineering,Biology
3,047
14,251,545
https://en.wikipedia.org/wiki/Coenzyme-B%20sulfoethylthiotransferase
In enzymology, coenzyme-B sulfoethylthiotransferase, also known as methyl-coenzyme M reductase (MCR) or, most systematically, as 2-(methylthio)ethanesulfonate:N-(7-thioheptanoyl)-3-O-phosphothreonine S-(2-sulfoethyl)thiotransferase, is an enzyme that catalyzes the final step in the formation of methane. It does so by combining the hydrogen donor coenzyme B and the methyl donor coenzyme M. Via this enzyme, most of the natural gas on Earth was produced. Ruminants (e.g. cows) produce methane because their rumens contain methanogenic prokaryotes (Archaea) that encode and express the set of genes of this enzymatic complex. The enzyme has two active sites, each occupied by the nickel-containing F430 cofactor. CH3–S–CoM (methyl-coenzyme M) + HS–CoB (coenzyme B) → CoM-S-S-CoB + methane The two substrates of this enzyme are 2-(methylthio)ethanesulfonate and N-(7-mercaptoheptanoyl)threonine 3-O-phosphate; its two products are CoM-S-S-CoB and methane. 3-Nitrooxypropanol inhibits the enzyme. In some species, the enzyme reacts in reverse (a process called reverse methanogenesis), catalysing the anaerobic oxidation of methane, thereby removing it from the environment. Such organisms are methanotrophs. This enzyme belongs to the family of transferases, specifically those transferring alkylthio groups. Structure Coenzyme-B sulfoethylthiotransferase is a multiprotein complex made up of a pair of identical halves. Each half is made up of three subunits: α, β and γ, also called McrA, McrB and McrG, respectively. References Further reading EC 2.8.4 Enzymes of unknown structure Enzymes Transferases Anaerobic digestion
Coenzyme-B sulfoethylthiotransferase
Chemistry,Engineering
442
24,467,180
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20North%20Macedonia
As a candidate country of the European Union, North Macedonia (MK) is included in the Nomenclature of Territorial Units for Statistics (NUTS). The three NUTS levels are: NUTS-1: MK0 North Macedonia NUTS-2: MK00 North Macedonia NUTS-3: 8 Statistical regions MK001 Vardarski MK002 Istočen MK003 Jugozapaden MK004 Jugoistočen MK005 Pelagoniski MK006 Pološki MK007 Severoistočen MK008 Skopski Below the NUTS levels, there are two LAU levels (LAU-1: municipalities; LAU-2: settlements). See also ISO 3166-2 codes of North Macedonia FIPS region codes of North Macedonia Sources Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe Overview map of CC (Candidate countries) - Statistical regions at level 1 Macedonia - Statistical regions at level 2 Macedonia - Statistical regions at level 3 Correspondence between the regional levels and the national administrative units Municipalities of Macedonia, Statoids.com Macedonia Subdivisions of North Macedonia
NUTS statistical regions of North Macedonia
Mathematics
223
75,883,882
https://en.wikipedia.org/wiki/Pseudomonas%20graminis
Pseudomonas graminis is a species of bacteria. References Pseudomonadales
Pseudomonas graminis
Biology
19
31,567,349
https://en.wikipedia.org/wiki/Widest%20path%20problem
In graph algorithms, the widest path problem is the problem of finding a path between two designated vertices in a weighted graph, maximizing the weight of the minimum-weight edge in the path. The widest path problem is also known as the maximum capacity path problem. It is possible to adapt most shortest path algorithms to compute widest paths, by modifying them to use the bottleneck distance instead of path length. However, in many cases even faster algorithms are possible. For instance, in a graph that represents connections between routers in the Internet, where the weight of an edge represents the bandwidth of a connection between two routers, the widest path problem is the problem of finding an end-to-end path between two Internet nodes that has the maximum possible bandwidth. The smallest edge weight on this path is known as the capacity or bandwidth of the path. As well as its applications in network routing, the widest path problem is also an important component of the Schulze method for deciding the winner of a multiway election, and has been applied to digital compositing, metabolic pathway analysis, and the computation of maximum flows. A closely related problem, the minimax path problem or bottleneck shortest path problem, asks for the path that minimizes the maximum weight of any of its edges. It has applications that include transportation planning. Any algorithm for the widest path problem can be transformed into an algorithm for the minimax path problem, or vice versa, by reversing the sense of all the weight comparisons performed by the algorithm, or equivalently by replacing every edge weight by its negation. Undirected graphs In an undirected graph, a widest path may be found as the path between the two vertices in the maximum spanning tree of the graph, and a minimax path may be found as the path between the two vertices in the minimum spanning tree. It follows immediately from this equivalence that all pairs widest paths in an n-vertex undirected graph can be computed in time O(n^2). In any graph, directed or undirected, there is a straightforward algorithm for finding a widest path once the weight of its minimum-weight edge is known: simply delete all smaller edges and search for any path among the remaining edges using breadth-first search or depth-first search. Based on this test, there also exists a linear time algorithm for finding a widest path in an undirected graph that does not use the maximum spanning tree. The main idea of the algorithm is to apply the linear-time path-finding algorithm to the median edge weight in the graph, and then either to delete all smaller edges or contract all larger edges according to whether a path does or does not exist, and recurse in the resulting smaller graph. One application of undirected bottleneck shortest paths is in forming composite aerial photographs that combine multiple images of overlapping areas. In the subproblem to which the widest path problem applies, two images have already been transformed into a common coordinate system; the remaining task is to select a seam, a curve that passes through the region of overlap and divides one of the two images from the other. Pixels on one side of the seam will be copied from one of the images, and pixels on the other side of the seam will be copied from the other image. Unlike other compositing methods that average pixels from both images, this produces a valid photographic image of every part of the region being photographed. 
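The threshold test described above (delete every edge lighter than a candidate bottleneck value, then check reachability) is simple to implement. The sketch below is an illustrative, non-optimized variant: instead of the linear-time median-based recursion, it binary-searches over the sorted distinct edge weights, which still finds the optimal bottleneck value in O(m log m) time. All function and variable names are chosen only for this example.

```python
# Illustrative sketch: widest (maximum-bottleneck) s-t path value via binary
# search over edge weights plus a reachability test, as described above.
from collections import deque

def reachable(n, edges, s, t, min_weight):
    """BFS using only edges of weight >= min_weight (undirected graph)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        if w >= min_weight:
            adj[u].append(v)
            adj[v].append(u)
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return False

def widest_path_bottleneck(n, edges, s, t):
    """Largest w such that s and t are connected using only edges of weight >= w."""
    weights = sorted({w for _, _, w in edges})
    best = None
    lo, hi = 0, len(weights) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if reachable(n, edges, s, t, weights[mid]):
            best = weights[mid]      # feasible: try a larger bottleneck
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Example: vertices 0..3, edges given as (u, v, weight)
edges = [(0, 1, 4), (1, 3, 2), (0, 2, 3), (2, 3, 3)]
print(widest_path_bottleneck(4, edges, 0, 3))   # -> 3, via the path 0-2-3
```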
In this compositing application, the edges of a grid graph are weighted by a numeric estimate of how visually apparent a seam across that edge would be, and a bottleneck shortest path is found for these weights. Using this path as the seam, rather than a more conventional shortest path, causes the system to find a seam that is difficult to discern at all of its points, rather than allowing it to trade off greater visibility in one part of the image for lesser visibility elsewhere. A solution to the minimax path problem between the two opposite corners of a grid graph can be used to find the weak Fréchet distance between two polygonal chains. Here, each grid graph vertex represents a pair of line segments, one from each chain, and the weight of an edge represents the Fréchet distance needed to pass from one pair of segments to another. If all edge weights of an undirected graph are positive, then the minimax distances between pairs of points (the maximum edge weights of minimax paths) form an ultrametric; conversely every finite ultrametric space comes from minimax distances in this way. A data structure constructed from the minimum spanning tree allows the minimax distance between any pair of vertices to be queried in constant time per query, using lowest common ancestor queries in a Cartesian tree. The root of the Cartesian tree represents the heaviest minimum spanning tree edge, and the children of the root are Cartesian trees recursively constructed from the subtrees of the minimum spanning tree formed by removing the heaviest edge. The leaves of the Cartesian tree represent the vertices of the input graph, and the minimax distance between two vertices equals the weight of the Cartesian tree node that is their lowest common ancestor. Once the minimum spanning tree edges have been sorted, this Cartesian tree can be constructed in linear time. Directed graphs In directed graphs, the maximum spanning tree solution cannot be used. Instead, several different algorithms are known; the choice of which algorithm to use depends on whether a start or destination vertex for the path is fixed, or whether paths for many start or destination vertices must be found simultaneously. All pairs The all-pairs widest path problem has applications in the Schulze method for choosing a winner in multiway elections in which voters rank the candidates in preference order. The Schulze method constructs a complete directed graph in which the vertices represent the candidates and every two vertices are connected by an edge. Each edge is directed from the winner to the loser of a pairwise contest between the two candidates it connects, and is labeled with the margin of victory of that contest. Then the method computes widest paths between all pairs of vertices, and the winner is the candidate whose vertex has wider paths to each opponent than vice versa. The results of an election using this method are consistent with the Condorcet method – a candidate who wins all pairwise contests automatically wins the whole election – but it generally allows a winner to be selected, even in situations where the Condorcet method itself fails. The Schulze method has been used by several organizations including the Wikimedia Foundation. To compute the widest path widths for all pairs of nodes in a dense directed graph, such as the ones that arise in the voting application, the asymptotically fastest known approach takes time O(n^((3+ω)/2)), where ω is the exponent for fast matrix multiplication. Using the best known algorithms for matrix multiplication, this time bound becomes approximately O(n^2.69). 
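A simple cubic-time alternative to the fast-matrix-multiplication approach is a Floyd–Warshall-style dynamic program in which the usual "minimum of sums" update is replaced by a "maximum of minimums" update; as noted next, the Schulze reference implementation uses a modification of Floyd–Warshall along these lines. The sketch below is illustrative only, and the numeric example is made up.

```python
# Illustrative sketch: all-pairs widest path widths via a Floyd-Warshall-style
# dynamic program.  The shortest-path update
#     d[i][j] = min(d[i][j], d[i][k] + d[k][j])
# is replaced by the bottleneck update
#     d[i][j] = max(d[i][j], min(d[i][k], d[k][j])).
def all_pairs_widest(weights):
    """weights[i][j] = capacity of edge i->j, or 0 if there is no edge."""
    n = len(weights)
    width = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if i != j:
                    through_k = min(width[i][k], width[k][j])
                    if through_k > width[i][j]:
                        width[i][j] = through_k
    return width

# Example: 3 nodes with made-up pairwise edge weights (e.g. defeat strengths).
w = [
    [0, 5, 0],
    [0, 0, 7],
    [4, 0, 0],
]
for row in all_pairs_widest(w):
    print(row)    # -> [0, 5, 5], [4, 0, 7], [4, 4, 0]
```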
Instead, the reference implementation for the Schulze method uses a modified version of the simpler Floyd–Warshall algorithm, which takes O(n^3) time. For sparse graphs, it may be more efficient to repeatedly apply a single-source widest path algorithm. Single source If the edges are sorted by their weights, then a modified version of Dijkstra's algorithm can compute the bottlenecks between a designated start vertex and every other vertex in the graph, in linear time. The key idea behind the speedup over a conventional version of Dijkstra's algorithm is that the sequence of bottleneck distances to each vertex, in the order that the vertices are considered by this algorithm, is a monotonic subsequence of the sorted sequence of edge weights; therefore, the priority queue of Dijkstra's algorithm can be implemented as a bucket queue: an array indexed by the numbers from 1 to m (the number of edges in the graph), where array cell i contains the vertices whose bottleneck distance is the weight of the edge with position i in the sorted order. This method allows the widest path problem to be solved as quickly as sorting; for instance, if the edge weights are represented as integers, then the time bounds for integer sorting a list of integers would apply also to this problem. Single source and single destination It has been suggested that service vehicles and emergency vehicles should use minimax paths when returning from a service call to their base. In this application, the time to return is less important than the response time if another service call occurs while the vehicle is in the process of returning. By using a minimax path, where the weight of an edge is the maximum travel time from a point on the edge to the farthest possible service call, one can plan a route that minimizes the maximum possible delay between receipt of a service call and arrival of a responding vehicle. Maximin paths have also been used to model the dominant reaction chains in metabolic networks; in this model, the weight of an edge is the free energy of the metabolic reaction represented by the edge. Another application of widest paths arises in the Ford–Fulkerson algorithm for the maximum flow problem. Repeatedly augmenting a flow along a maximum capacity path in the residual network of the flow leads to a small bound, O(m log U), on the number of augmentations needed to find a maximum flow; here, the edge capacities are assumed to be integers that are at most U. However, this analysis does not depend on finding a path that has the exact maximum of capacity; any path whose capacity is within a constant factor of the maximum suffices. Combining this approximation idea with the shortest path augmentation method of the Edmonds–Karp algorithm leads to a maximum flow algorithm with running time O(mn log U). It is possible to find maximum-capacity paths and minimax paths with a single source and single destination very efficiently even in models of computation that allow only comparisons of the input graph's edge weights and not arithmetic on them. The algorithm maintains a set S of edges that are known to contain the bottleneck edge of the optimal path; initially, S is just the set of all edges of the graph. At each iteration of the algorithm, it splits S into an ordered sequence of subsets of approximately equal size; the number of subsets in this partition is chosen in such a way that all of the split points between subsets can be found by repeated median-finding in time O(m). 
The algorithm then reweights each edge of the graph by the index of the subset containing the edge, and uses the modified Dijkstra algorithm on the reweighted graph; based on the results of this computation, it can determine in linear time which of the subsets contains the bottleneck edge weight. It then replaces S by the subset that it has determined to contain the bottleneck weight, and starts the next iteration with this new set S. The number of subsets into which S can be split increases exponentially with each step, so the number of iterations is proportional to the iterated logarithm function, log* n, and the total time is O(m log* n). In a model of computation where each edge weight is a machine integer, the use of repeated bisection in this algorithm can be replaced by a list-splitting technique, allowing S to be split into a larger number of smaller sets in a single step and leading to a linear overall time bound. Euclidean point sets A variant of the minimax path problem has also been considered for sets of points in the Euclidean plane. As in the undirected graph problem, this Euclidean minimax path problem can be solved efficiently by finding a Euclidean minimum spanning tree: every path in the tree is a minimax path. However, the problem becomes more complicated when a path is desired that not only minimizes the hop length but also, among paths with the same hop length, minimizes or approximately minimizes the total length of the path. The solution can be approximated using geometric spanners. In number theory, the unsolved Gaussian moat problem asks whether or not minimax paths in the Gaussian prime numbers have bounded or unbounded minimax length. That is, does there exist a constant K such that, for every pair of points p and q in the infinite Euclidean point set defined by the Gaussian primes, the minimax path in the Gaussian primes between p and q has minimax edge length at most K? References Network theory Polynomial-time problems Graph algorithms Computational problems in graph theory
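The linear-time single-source method described above relies on pre-sorted edge weights and a bucket queue. The sketch below instead shows the simpler heap-based variant of Dijkstra's algorithm adapted to maximize the path bottleneck; it is easier to follow, though asymptotically slower at O(m log n). The graph and names are invented for the example.

```python
# Illustrative sketch: single-source widest (maximum-bottleneck) paths using a
# Dijkstra-like algorithm with a max-heap instead of the bucket queue
# described above.
import heapq
import math

def widest_paths_from(source, adj):
    """adj[u] = list of (v, capacity). Returns the best bottleneck to each vertex."""
    best = {source: math.inf}             # bottleneck of the empty path
    heap = [(-math.inf, source)]          # heapq is a min-heap, so negate capacities
    while heap:
        neg_cap, u = heapq.heappop(heap)
        cap = -neg_cap
        if cap < best.get(u, -math.inf):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            bottleneck = min(cap, w)
            if bottleneck > best.get(v, -math.inf):
                best[v] = bottleneck
                heapq.heappush(heap, (-bottleneck, v))
    return best

# Example: directed graph with capacities on the edges.
adj = {
    "s": [("a", 4), ("b", 10)],
    "a": [("t", 6)],
    "b": [("t", 3)],
}
print(widest_paths_from("s", adj))   # bottleneck to "t" is 4, via s-a-t
```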
Widest path problem
Mathematics
2,481
72,658,320
https://en.wikipedia.org/wiki/Intercellular%20communication
Intercellular communication (ICC) refers to the various ways and structures that biological cells use to communicate with each other directly or through their environment. Often the environment has been thought of as the extracellular spaces within an animal. More broadly, cells may also communicate with other animals, either of their own group or species, or other species in the wider ecosystem. Different types of cells use different proteins and mechanisms to communicate with one another using extracellular signalling molecules or electric fluctuations which could be likened to an intercellular ethernet. Components of each type of intercellular communication may be involved in more than one type of communication, making attempts at clearly separating the types of communication listed somewhat futile. Broadly speaking, intercellular communication may be categorized as being within a single animal or between an animal and other animals in the ecosystem in which it lives. In this article, intercellular communication has been further collated into various areas of research rather than by functional or structural characteristics. Communication within an organism Cell signalling Molecular cell signaling Single-celled organisms sense their environment to seek food and may send signals to other cells to behave symbiotically or reproduce. A classic example of this is the slime mold. The slime mold shows how intercellular communication with a small molecule (e.g., cyclic AMP) allows a simple organism to form from an organized aggregation of single cells. Research into cell signalling investigated a receptor specific to each signal or multiple receptors potentially being activated by a single signal. It is not only the presence or absence of a signal that is important but also the strength. Using a chemical gradient to coordinate cell growth and differentiation continues to be important as multicellular animals and plants become more complex. This type of intercellular communication within an organism is commonly referred to as cell signalling. This type of intercellular communication is typified by a small signalling molecule diffusing through the spaces around cells, often relying on a diffusion gradient forming part of the signalling response. Cell junctions Complex organisms may have molecules to hold the cells together which can also be involved in intercellular communication. Some binding molecules are termed the extracellular matrix and may involve longer molecules like cellulose for the cell wall in plants or collagen in animals. When the membranes of two animal cells are close, they may form special types of cell junctions, which come in three broad types: occluding junctions (such as tight junctions and septate junctions), anchoring junctions (such as adherens junctions, desmosomes, focal adhesions, and hemidesmosomes), and communicating junctions (such as gap junctions). The structures they form also form parts of complex protein signaling pathways. In one respect, tight junctions play a generic role in cell signaling in that they may form a tight zip around cells, forming a barrier to stop even small, unwanted signalling molecules from getting between cells. Without these junctions, signalling molecules may spread to another group of cells which are not requiring the signal or escape too quickly from where they are needed. Gap junctions allow neighboring cells to directly exchange small molecules. 
Pannexins, connexins, innexins Pannexins, connexins, and innexins are transmembrane proteins that are all named after the Latin term nexus, meaning to connect. They are grouped together because they all share a similar structure of 4 transmembrane domains crossing the cell membrane in a similar way, but they do not all share enough sequence homology to be considered directly related. Earlier investigations involving the connexins demonstrated cells forming a direct connection with each other using groups of connexins, but not connections with the cell exterior. As such, they were not considered at the time to participate in extracellular cell signalling. Later studies made it apparent that connexins could connect directly to the cell exterior, meaning they are a conduit for the release and uptake of signalling molecules from the environment external to the cell. Furthermore, pannexins appear to do this to such an extent that they may rarely, if ever, participate in direct cell-to-cell coupling. As the pannexin/innexin/connexin family tree indicates, many animals do not appear to have pannexins/innexins/connexins, perhaps indicating there may be other similar proteins still to be discovered that serve to aid intercellular communication in these animals. Direct links between cells Septal pores In fungi, pores crossing the cell walls that separate cellular compartments act as an ICC for the movement of molecules to neighboring compartments. Most red algae have pores, called pit connections, in the cell septum that partitions a cell/filament. As a leftover of the mitotic division, the pore may be plugged up by the cell. There are also similar connections between neighboring cells/filaments that may allow sharing of nutrients. Cells of a different species may initiate and form a pit connection with the host algae. Plasmodesmata in plants Plant cells usually have thick cell walls which need to be crossed if neighboring cells are to communicate directly. Plasmodesmata form a pipe through the cell wall, forming an ICC. The pipe has another smaller membranous pipe concentric to it, connecting the endoplasmic reticulum of the two cells via a tube called the desmotubule. The larger pipe also contains cytoskeletal and other elements. It is presumed viruses use plasmodesmata as a route through the cell walls to spread through the plant. Gap junctions in animals Gap junctions can form intercellular links, effectively a tiny direct regulated "pipe" called a connexon pair between the cytoplasms of the two cells that form the junction. 6 connexins make a connexon, and 2 connexons make a connexon pair, so 12 connexin proteins build each tiny ICC. This ICC allows two cells to communicate directly while being sealed from the outside world. Cells may form one or thousands of these tiny ICCs between them and their other neighbors, potentially forming large networks of directly linked cells. The connexon pairs form ICCs that can transport water and many other molecules up to around 1000 atoms in size, and can be very rapidly signaled to turn on and off as required. These ICCs also communicate electrical signals that can be rapidly turned on and off. Adding to their versatility, there is a range of these ICC types, because there are over 20 different connexins with different properties that can combine with each other in a variety of ways. The variety of potential signaling combinations that results is enormous. 
A much-studied example of gap junctions' electrical signalling abilities is the electrical synapse found in nerves. In heart muscle, gap junctions function to coordinate the beating of the heart. Adding even further to their versatility, gap junctions can also form a direct connection to the exterior of a cell, paralleling the functioning of their protein cousins, the pannexins, which are explained elsewhere. Intercellular bridge Intercellular bridges are larger than gap junction ICCs, so they are able to allow the movement of not only small signaling molecules but also large DNA molecules or even whole cell organelles. They are maintained between two cells, allowing them to exchange cytoplasmic contents, and are frequently observed when cells need intimate communication, such as when they are reproducing. They are found in prokaryotes for exchanging DNA, in small organisms such as Pinnularia, Valonia ventricosa, Volvox and C. elegans, in mitosis generally (cytokinesis), in Blepharisma for sexual reproduction, and during meiosis, including spermatocytogenesis to synchronise the development of germ cells and oogenesis in larger organisms. Bridges have also been shown to assist in cell migration. Cytoplasmic bridges can also be used to attack another cell, as in the case of Vampirococcus. Cell fusion Cells that require a more permanent, extensive cytoplasmic linkage may fuse with each other to varying degrees, in many cases forming one large cell or syncytium. This happens extensively during the development of skeletal muscle, forming large muscle fibers. Later it was confirmed in other tissues such as the eye lens. Though both involve cell fibers, in the case of the eye lens the cell fusion is more limited in scope, resulting in a less extensively fused stratified syncytium. Vesicles Lipid-membrane-bound vesicles of a large range of sizes are found inside and outside of cells, containing a huge variety of things ranging from food to invading organisms and from water to signaling molecules. The use of an electrical nerve impulse from a neuron at a neuromuscular junction to stimulate a muscle to contract is an example of very small (about 0.05 μm) vesicles being directly involved in regulating intercellular communication. The neuron produces thousands of tiny vesicles, each containing thousands of signalling molecules. One vesicle is released close to the muscle every second or so when resting. When activated by a nerve impulse, more than 100 vesicles will be released at once – hundreds of thousands of signalling molecules – causing a significant contraction of the muscle fiber. All this happens in a small fraction of a second. Generally, small vesicles used to transport signalling molecules released from the cell are termed exosomes or simply extracellular vesicles (EVs), and in addition to their importance to the organism they are also important for biosensors. Extracellular vesicles can be released from malignant cancer cells. These extracellular vesicles have been shown to contain gap junction proteins over-expressed in the malignant cells that spread to non-cancerous cells, appearing to enhance the spread of the malignancy. Vesicles are also associated with the transport of materials outside of the cell to enable growth and repair of tissues in the extracellular matrix. In situations such as these they may be given special designations such as matrix vesicles (MVs). 
Examples of larger vesicles are found in regulated secretory pathways in endocrine and exocrine tissues, in transcytosis, and in the vesiculo-vacuolar organelle (VVO) in endothelial and perhaps other cell types. Another form of transfer of pieces of membrane around junctions is called trans-endocytosis. Some large intercellular vesicles also appear to stay intact as they transport their contents from one part of a tissue to another and involve gap junction plaques. Communication in nervous systems When we think of intercellular communication we often use our nervous system as a point of reference. In vertebrates, nerves made up of many cells are typically highly specialized in form and function, usually being most complex in the brain. They ensure rapid, precise, directional cell-to-cell communication over longer distances, for example from your brain to your hand. The nerve cells can be thought of as intermediaries, not so much communicating with each other but rather passing on messages from one neighboring cell to another. Being "accessory" cells that pass on the message, they require additional space and can consume a lot of energy within an organism. Simpler organisms such as sponges and placozoans often have less food availability and so less energy to spare. Their nervous systems are less specialized, and the cells that are part of them are required to perform other functions as well. Ephaptic coupling When groups of nerve cells form, another type of intercellular communication, called ephaptic coupling, can arise. It was first quantified by Katz in 1940, but it has been difficult to associate any one structure or "ephapse" with this form of communication. There are reductionist attempts to associate particular groups of nerve cells exhibiting ephaptic coupling with particular functions in the brain. As yet there are no studies on the simplest neural systems, such as the polar bodies of Ctenophores, to see if ephaptic coupling may explain some of their more complex behaviors. Ecosystem intercellular communication The definition of biological communication is not simple. In the field of cell biology, early research was at a cellular-to-organism level. How the individual cells in one organism could affect those in another was difficult to trace and not of primary concern. If intercellular communication includes one cell transmitting a signal to another to elicit a response, intercellular communication is not restricted to the cells within a single organism. Over short distances, interkingdom communication in plants has been reported. In-water reproduction often involves vast synchronized release of gametes, called spawning. Over large distances, cells in one plant will communicate with cells in another plant of the same species and other species by releasing signals into the air, such as green leaf volatiles that can, among other things, pre-warn neighbors of herbivores or, in the case of ethylene gas, trigger ripening in fruits. Intercellular signalling in plants can also happen below ground with the mycorrhizal network, which can link large areas of plants via fungal networks, allowing the redistribution of environmental resources. Looking at insect colonies such as bees and ants, researchers have discovered that the pheromones released from one organism's cells to another organism's cells can coordinate colonies in a way reminiscent of slime molds. Cell-to-cell signalling using "pheromones" was also found in more complex animals. As complexity increases, so does the effect of signals. 
"Pheromones" in more complex animals such as vertebrates are now more correctly referred to as "chemosignals" including between species. The idea that intercellular communication is so similar among cells within an organism as well as cells between different organisms, even prey, is demonstrated by vinnexin. This protein is a modified form of an innexin protein found in a caterpillar. That is, the vinnexin is very similar to the caterpillar's own innexin, and could only have been derived from a non-viral innexin in some way that is unclear. The caterpillar innexin forms normal intercellular connections inside the caterpillar as part of the caterpillar's immune response to an egg implanted by a parasitic wasp. The innexin helps ensure the wasp egg is neutralized, saving the caterpillar from the parasite. So what does the vinnexin do and how? Evolution has led to a virus that communicates with the wasp in a way that evades the wasps antiviral responses, allowing the virus to live and replicate in the wasps ovaries. When the wasp injects its egg into the caterpillar host many virus from the wasp's ovary are also injected. The virus particles do not replicate in the caterpillar cells but rather communicate with the caterpillars genetic machinery to produce vinnexin protein. The vinnexin protein incorporates itself into the caterpillar's cells altering the communication in the caterpillar so the caterpillar goes on living but with an altered immune response. Vinnexins are able to mix with normal innexins to alter communication within the caterpillar and probably do. The altered communication within the caterpillar prevents the caterpillar's defenses rejecting the wasps egg. As a result, the wasp egg hatches, consumes the caterpillar and the virus from the wasp larva's mother, and repeats the cycle. It can be seen the virus and wasp are essential to each other and communicate well with each other to allow the virus to live and replicate, but only in a non-destructive way inside the wasp ovary. The virus is injected into a caterpillar by the wasp, but the virus does not replicate in the caterpillar, the virus only communicates with the caterpillar to modify it in a non-lethal way. The wasp larvae will then slowly eat the caterpillar without being stopped while communicating with the virus again to ensure that the wasp has a place in its ovary for it to again replicate. Connexins/innexins/vinnexins, once thought to only participate in providing a path for signaling molecules or electrical signals have now been shown to act as a signaling molecule itself. References Cell biology Cell communication Cell anatomy Cell signaling Systems biology
Intercellular communication
Biology
3,297
40,820,811
https://en.wikipedia.org/wiki/Boxcar%20averager
A boxcar averager, gated integrator or boxcar integrator is an electronic test instrument that integrates the signal input voltage after a defined waiting time (trigger delay) over a specified period of time (gate width) and then averages over multiple integration results (samples) – for a mathematical description see boxcar function. The main purpose of this measurement technique is to improve the signal-to-noise ratio in pulsed experiments, which often have a low duty cycle, by the following three mechanisms: 1) signal integration acts as a first averaging step that strongly suppresses noise components at the reciprocal of the gate width and higher frequencies, 2) time-domain based selection keeps the signal parts that actually carry the information of interest and neglects all signal parts where only noise is present, and 3) averaging over a defined number of periods provides low-pass filtering and convenient adjustment of the time resolution. Similar to lock-in amplifiers, boxcar averagers are mostly used for the analysis of periodic signals. Whereas lock-ins can be understood as sophisticated band-pass filters with adjustable center frequency and bandwidth, the boxcar averager allows the signal of interest and the resulting time resolution to be defined mostly in the time domain. Principle of operation The boxcar operation is defined by a trigger delay, a gate width and the number of trigger events (i.e., samples) that are averaged over in the buffer. The principle of operation can be understood as a two-step process: signal integration over the desired gate width, and averaging of the integrated signal over a defined number of periods/trigger events. In a simple implementation, the core circuitry looks like a regular RC low-pass filter that can be gated by a switch S. Provided the filter time constant τ = RC is set to sufficiently large values with respect to the gate width, the output voltage is to a good approximation the integral of the input signal, with a signal bandwidth of B = 1/(4RC). The output of this filter can then be subjected to another analog circuit for subsequent averaging. After each trigger event this sampling circuit has to be reset before receiving the next pulse. The time it takes for this reset is one of the major speed limitations for analog implementations, where maximum trigger rates of a few tens of kHz are typical, even though the gate width itself can be as short as a few tens of picoseconds when the delay is set to zero. History The origin of the boxcar averager dates back to as early as 1950, when the technique helped to improve signal quality in experiments investigating nuclear magnetic resonances with pulsed schemes. The first reported application of boxcar circuits to NMR was by Holcomb and Norberg. In their 1955 paper, Holcomb and Norberg credit the invention of the "boxcar integrator" to a large extent to L. S. Kypta and H. W. Knoebel. References External links What is a Boxcar Averager? Electronic test equipment
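The gate-and-average operation described above is easy to simulate numerically. The following sketch is illustrative only – the input signal is synthetic and all parameter values are arbitrary choices for the example: it integrates a noisy, repetitive pulse over a gate placed after a trigger delay, then averages the result over many trigger events, which is the basic boxcar operation.

```python
# Illustrative simulation of a boxcar averager: integrate the input over a gate
# (after a trigger delay) for each trigger event, then average over N samples.
# All parameter values here are arbitrary and chosen only for the example.
import random

SAMPLE_RATE = 1e6        # 1 MHz sampling of the input, in Hz
TRIGGER_DELAY = 100e-6   # gate opens 100 us after the trigger
GATE_WIDTH = 50e-6       # gate stays open for 50 us
N_TRIGGERS = 1000        # number of trigger events averaged in the buffer

def pulse_with_noise(t):
    """Synthetic input: a 1 V flat-topped pulse inside the gate window, plus noise."""
    signal = 1.0 if TRIGGER_DELAY <= t < TRIGGER_DELAY + GATE_WIDTH else 0.0
    return signal + random.gauss(0.0, 2.0)   # noise much larger than the signal

def boxcar_average():
    dt = 1.0 / SAMPLE_RATE
    n_gate = int(GATE_WIDTH / dt)
    start = int(TRIGGER_DELAY / dt)
    integrals = []
    for _ in range(N_TRIGGERS):
        # Integrate only inside the gate; everything outside is ignored.
        area = sum(pulse_with_noise((start + i) * dt) * dt for i in range(n_gate))
        integrals.append(area / GATE_WIDTH)   # normalize to an average voltage
    return sum(integrals) / len(integrals)

random.seed(0)
print(f"Boxcar output ≈ {boxcar_average():.3f} V (true gated signal level is 1 V)")
```

Despite the noise being twice the signal amplitude on every sample, the combination of integration over the gate and averaging over many triggers recovers the 1 V level to within roughly one percent, illustrating the signal-to-noise improvement discussed above.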
Boxcar averager
Technology,Engineering
591
3,610,026
https://en.wikipedia.org/wiki/Jabol
Jabol () is a slang name for a kind of cheap Polish fruit wine. It is made from fermented fruit and is bottled at 8% to 18% alcohol by volume. Its name is derived from , "apple", from which it is often made. Though it is usually fruit flavoured, it can come in other flavours such as chocolate or mint. It comes in a variety of containers and is sold under a variety of names. History Jabol was first developed in post-war Poland as a cheap alcohol produced from the apple orchards that had been cultivated in the former-Prussian areas of the Recovered Territories. The drink gained a reputation as an unsophisticated alcoholic beverage consumed by youths intending to get drunk quickly and cheaply. Slang names Apart from jabol or jabcok, this beverage has amassed a variety of colourful slang names. Two that are commonly encountered are sikacz (a reference to the effect of alcohol on urination) and siarkofrut (a reference to the Bobofrut brand of children's fruit juice, as well as to the wine's taste of sulfur, a result from its low-quality production process). Packaging and price Jabol is sold in glass and plastic bottles or cartons (similar to milk or juice cartons). Sometimes a deposit is required on bottles, which is usually 20–30% of the wine price. In popular culture Pieniądze to nie wszystko – comedy film by Juliusz Machulski from 2000 Jabol punk, Jabolowe ofiary (Jabol victims or Jabol losers) – songs by KSU from the album Pod prąd. Tanie Wino (Cheap wine) – song by Haratacze SO2 – song by Zielone Żabki (sulfur dioxide reference) Acid Drinkers – Polish thrash metal band. The name is a reference to the drink. Autobiografia – one of the most popular songs by the Polish band Perfect. Arizona – documentary by Ewa Borzęcka from 1997, showing life in poor Polish village. O Jeden Most Za Daleko (One bridge too far) – song from 2022 by a heavy metal band Nocny Kochanek. See also Flavored fortified wines Jug wine Plonk (wine) Bormotukha, Russian equivalent References External links Polish site about jabol Fermented drinks Polish alcoholic drinks Fruit wines
Jabol
Biology
497
7,661,732
https://en.wikipedia.org/wiki/-al
In chemistry, the suffix -al is the IUPAC nomenclature used in organic chemistry to form the systematic names of aldehydes, compounds containing the –CHO group. It was extracted from the word "aldehyde". Except when a group of higher priority is present, an aldehyde is named with -al, as in 'propanal'. Some aldehydes also have common names, such as formaldehyde for methanal and acetaldehyde for ethanal. Benzaldehyde does not have a systematic name ending in -al. References al
-al
Chemistry
127
41,676,232
https://en.wikipedia.org/wiki/COS%20%28operating%20system%29
COS (China Operating System) is a Linux kernel-based mobile operating system developed in China mainly targeting mobile devices, tablets and set-top boxes. It is being developed by the Institute of Software at the Chinese Academy of Sciences (ISCAS) together with Shanghai Liantong Network Communications Technology to compete with foreign operating systems like iOS and Android. The operating system is based on Linux but the platform is closed source. Security and the risk of back doors in devices from foreign vendors are some of the main motivations for COS. Android had almost 90% of the smart phone market when COS was introduced and Apple most of the remaining market share. COS looks very similar to Android and has its own Application Portal much like Android Market and iOS App Store. Licensing The Linux kernel is GPLv2 and the COS framework is closed source and licensed by the originators. Core OS The COS Core operating system is based on the Linux kernel and can be seen as a Linux distribution, in the same way as the Android operating system. API According to the official statements, COS supports HTML5 based applications and Java based applications. See also Kylin Linux Deepin Zorin OS StartOS Comparison of mobile operating systems List of free and open source Android applications References External links Demonstration video Mobile Linux ARM Linux distributions Linux distributions
COS (operating system)
Technology
268
13,253,580
https://en.wikipedia.org/wiki/False%20door
A false door, or recessed niche, is an artistic representation of a door which does not function like a real door. They can be carved in a wall or painted on it. They are a common architectural element in the tombs of ancient Egypt, but appeared possibly earlier in some Pre-Nuragic Sardinian tombs known as Domus de Janas. Later they also occur in Etruscan tombs and in the time of ancient Rome they were used in the interiors of both houses and tombs. Mesopotamian origin Egyptian architecture was influenced by Mesopotamian precedents, as it adopted elements of Mesopotamian Temple and civic architecture. These exchanges were part of Egypt-Mesopotamia relations since the 4th millennium BCE. Recessed niches were characteristic of Mesopotamian Temple architecture, and were adopted in Egyptian architecture, especially for the design of Mastaba tombs, during the First Dynasty and the Second Dynasty, from the time of the Naqada III period (circa 3000 BCE). It is unknown if the transfer of this design was the result of Mesopotamian workmen in Egypt, or if temple designs appearing on imported Mesopotamian seals may have been a sufficient source of inspiration for Egyptian architects. Ancient Egypt The ancient Egyptians believed that the false door was a threshold between the worlds of the living and the dead and through which a deity or the spirit of the deceased could enter and exit. The false door was usually the focus of a tomb's offering chapel, where family members could place offerings for the deceased on a special offering slab placed in front of the door. Most false doors are found on the west wall of a funerary chapel or offering chamber because the Ancient Egyptians associated the west with the land of the dead. In many mastabas, both husband and wife buried within have their own false door. Structure A false door usually is carved from a single block of stone or plank of wood, and it was not meant to function as a normal door. Located in the center of the door is a flat panel, or niche, around which several pairs of door jambs are arranged—some convey the illusion of depth and a series of frames, a foyer, or a passageway. A semi-cylindrical drum, carved directly above the central panel, was used in imitation of the reed-mat that was used to close real doors. The door is framed with a series of moldings and lintels as well, and an offering scene depicting the deceased in front of a table of offerings usually is carved above the center of the door. Sometimes, the owners of the tomb had statues carved in their image placed into the central niche of the false door. Historical development The configuration of the false door, with its nested series of doorjambs, is derived from the niched palace façade and its related slab stela, which became a common architectural motif in the early Dynastic period. The false door was used first in the mastabas of the Third Dynasty of the Old Kingdom (c. 27th century BCE) and its use became nearly universal in tombs of the fourth through sixth dynasties. Rarely, the Old Kingdom false door was combined with statues, demonstrating the common ancestry of the false door and naos in similar early ancient Egyptian architectural features. During the nearly one hundred and fifty years spanning the reigns of the sixth Dynasty pharaohs Pepi I, Merenre, and Pepi II, the false door motif went through a sequential series of changes affecting the layout of the panels, allowing historians to date tombs based on which style of false door was used. 
The same dating approach is used also for the First Intermediate Period. After the First Intermediate Period, the popularity of the false doors diminished, being replaced by stelae as the primary surfaces for writing funerary inscriptions. Representations of false doors also appeared on Middle Kingdom coffins such as the Coffin of Nakhtkhnum (MET 15.2.2a, b) dating to late Dynasty 12 (). Here, the false door is represented by two wooden doors that are secured with door bolts, bracketed on both sides by architectural niching – recalling earlier niched temple and palace façades such as the enclosure wall that surrounds the mortuary complex of king Djoser of the Third Dynasty. In a similar manner to the Old Kingdom false doors, representations of false doors on Middle Kingdom coffins facilitated the movement of the deceased's spirit between the afterlife and the world of the living. Inscriptions The side panels usually are covered in inscriptions naming the deceased along with their titles, and a series of standardized offering formulas. These texts extol the virtues of the deceased and express positive wishes for the afterlife. For example, the false door of Ankhires reads: The lintel reads: The left and right outer jambs read: Prehistoric Sardinia Carved or painted Pre-Nuragic false doors appear in about 20 tombs mostly located in northwestern Sardinia, an example being some of the Domus de Janas of the necropolis of Anghelu Ruju, which are variously datable from the Ozieri to the Bonnanaro cultures of Pre-Nuragic Sardinia (). These false doors, apparently resulting from a strong Eastern influence, usually appear on the back wall of the main chamber, and are represented by horizontal and vertical frames and a projecting lintel. Sometimes the door is topped with painted or carved U-shaped bull horns, inscribed inside each other in a variable number. Unlike the Egyptian ones, the meaning of pre-Nuragic false doors is less clear. It has been argued that these represents the passageway to the afterlife that definitively separate the deceased from the living loved ones, also preventing a possible return. Alternatively, it is possible that these false doors are simply clues of the plan of the corresponding former house of the deceased. Etruria In Etruscan tombs the false door has a Doric design and is always depicted closed. Most often it is painted, but on some occasions it is carved in relief, like in the Tomb of the Charontes at Tarquinia. Unlike the false door in ancient Egyptian tombs, the Etruscan false door has given rise to a diversity of interpretations. It might have been the door to the underworld, similar to its use of the ancient Egypt. It could have been used to mark the place where a new doorway and chamber would be carved for future expansion of the tomb. Another possibility is that it is the door of the tomb itself, as seen from outside. In the Tomb of the Augurs at Tarquinia two men are painted to the left and right of a false door. Their gestures of lamentation indicate that the deceased were considered to be behind the door. Ancient Rome Painted doors were used frequently in the decoration of both First- and Second style interiors of Roman villas. An example is the villa of Julius Polybius in Pompeii, where a false door is painted on a wall opposite a real door to achieve symmetry. Apart from creating architectural balance, they could serve to make the villa seem larger than it really was. 
See also Monumental inscription References Further reading External links Ancient Egypt from A to Z Doors Architectural elements Egyptian artefact types Burial monuments and structures Naqada III
False door
Technology,Engineering
1,448
6,269,788
https://en.wikipedia.org/wiki/Microsoft%20Forefront
Microsoft Forefront is a discontinued family of line-of-business security software by Microsoft Corporation. Microsoft Forefront products are designed to help protect computer networks, network servers (such as Microsoft Exchange Server and Microsoft SharePoint Server) and individual devices. As of 2015, the only actively developed Forefront product is Forefront Identity Manager. Components Forefront includes the following products: Identity Manager: State-based identity management software product, designed to manage users' digital identities, credentials and groupings throughout the lifecycle of their membership of an enterprise computer system Rebranded System Center Endpoint Protection: A business antivirus software product that can be controlled over the network, formerly known as Forefront Endpoint Protection, Forefront Client Security and Client Protection. Exchange Online Protection: A software as a service version of Forefront Protect for Exchange Server: Instead of installing a security program on the server, the customer re-routes its email traffic to the Microsoft online service before receiving them. Discontinued Threat Management Gateway: Discontinued server product that provides three functions: Routing, firewall and web cache. Formerly called Internet Security and Acceleration Server or ISA Server. Unified Access Gateway: Discontinued server product that protects network assets by encrypting all inbound access request from authorized users. Supports Virtual Private Networks (VPN) and DirectAccess. Formerly called Intelligent Application Gateway. Server Management Console: Discontinued web-based application that enables management of multiple instances of Protection for Exchange, Protection for SharePoint and Microsoft Antigen from a single interface. Protection for Exchange: A discontinued software product that detects viruses, spyware, and spam by integrating multiple scanning engines from security partners in a single solution to protect Exchange messaging environments. FPE provides an administration console that includes customizable configuration settings, filtering options, monitoring features and reports, and integration with the Forefront Online Protection for Exchange (FOPE) product. After installation, managing FPE on multiple Exchange servers can be done with the Protection Server Management Console. Additionally, FPE can be managed using Windows PowerShell, a command-line shell and task-based scripting technology that enables the automation of system administration tasks. Protection for SharePoint: A discontinued product that protects Microsoft SharePoint Server document libraries. It enforces rules that prevent documents containing malware, sensitive information, or out-of-policy content from being uploaded. Protection Server Management Console or Windows PowerShell can be used to manage Protection for SharePoint Server on multiple servers. Security for Office Communications Server: Protects computers running Microsoft Office Communications Server from malware. Formerly called Antigen for Instant Messaging. History The predecessor to the Forefront server protection products was the Antigen line of antivirus products created by Sybari Software. Sybari was acquired by Microsoft in 2005, and the first Microsoft-branded version of the product was called Microsoft Forefront Security for SharePoint (FSSP) Version 10. 
FSSP Version 10 supports Microsoft Office SharePoint Server 2007 or Microsoft Windows SharePoint Services version 3, whereas FPSP (the last version of the product) supports Microsoft Office SharePoint Server 2010, Microsoft SharePoint Foundation 2010, Microsoft Office SharePoint Server 2007 SP1, or Windows SharePoint Services version 3 SP1. See also Microsoft Servers References External links Microsoft Forefront Server Protection Blog on TechNet Blogs Computer security software Forefront
Microsoft Forefront
Engineering
654
5,171,756
https://en.wikipedia.org/wiki/4%20Centauri
4 Centauri is a star in the constellation Centaurus. It is a blue-white B-type subgiant with an apparent magnitude of +4.75 and is approximately 640 light years from Earth. 4 Centauri is a hierarchical quadruple star system. The primary component of the system, 4 Centauri A, is a spectroscopic binary, meaning that its components cannot be resolved, but periodic Doppler shifts in its spectrum show that two stars must be orbiting each other. 4 Centauri A has an orbital period of 6.927 days and an eccentricity of 0.23. Because light from only one of the stars can be detected (i.e. it is a single-lined spectroscopic binary), some parameters, such as the orbital inclination, are unknown. The secondary component is also a single-lined spectroscopic binary. It has an orbital period of 4.839 days and an eccentricity of 0.05. The secondary component is a metallic-lined A-type star. The two pairs themselves are separated by 14 arcseconds; one orbit would take at least 55,000 years. References Centaurus B-type subgiants Centauri, h Centauri, 4 Spectroscopic binaries 4 5221 120955 067786 Durchmusterung objects
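The quoted minimum orbital period for the wide pair can be sanity-checked with Kepler's third law. The sketch below is only an order-of-magnitude illustration: the total system mass is an assumed placeholder (the article gives none), and the projected separation is treated as a lower bound on the true semi-major axis; the result comes out in the same tens-of-thousands-of-years range as the quoted figure.

```python
# Order-of-magnitude check of the ~55,000-year orbit using Kepler's third law,
# P[yr] = sqrt(a[AU]^3 / M[solar masses]).  The total mass below is an assumed
# value for illustration; it is not given in the article.
import math

DISTANCE_LY = 640
ANGULAR_SEPARATION_ARCSEC = 14
ASSUMED_TOTAL_MASS_MSUN = 12          # placeholder for four early-type stars

distance_pc = DISTANCE_LY / 3.2616                            # light years -> parsecs
projected_sep_au = ANGULAR_SEPARATION_ARCSEC * distance_pc    # small-angle relation

# Treat the projected separation as a lower bound on the semi-major axis.
period_years = math.sqrt(projected_sep_au ** 3 / ASSUMED_TOTAL_MASS_MSUN)
print(f"projected separation ~ {projected_sep_au:,.0f} AU")
print(f"orbital period       > {period_years:,.0f} years")
```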
4 Centauri
Astronomy
274
5,622,569
https://en.wikipedia.org/wiki/Hansen%20solubility%20parameter
Hansen solubility parameters were developed by Charles M. Hansen in his Ph.D. thesis in 1967 as a way of predicting if one material will dissolve in another and form a solution. They are based on the idea that like dissolves like, where one molecule is defined as being 'like' another if it bonds to itself in a similar way. Specifically, each molecule is given three Hansen parameters, each generally measured in MPa^0.5: δd, the energy from dispersion forces between molecules; δp, the energy from dipolar intermolecular forces between molecules; and δh, the energy from hydrogen bonds between molecules. These three parameters can be treated as co-ordinates for a point in three dimensions, also known as the Hansen space. The nearer two molecules are in this three-dimensional space, the more likely they are to dissolve into each other. To determine if the parameters of two molecules (usually a solvent and a polymer) are within range, a value called the interaction radius (R0) is given to the substance being dissolved. This value determines the radius of the sphere in Hansen space, and its center is the three Hansen parameters. To calculate the distance (Ra) between Hansen parameters in Hansen space the following formula is used: (Ra)^2 = 4(δd1 − δd2)^2 + (δp1 − δp2)^2 + (δh1 − δh2)^2. Combining this with the interaction radius gives the relative energy difference (RED) of the system: RED = Ra/R0. If RED < 1 the molecules are alike and will dissolve; if RED = 1 the system will partially dissolve; if RED > 1 the system will not dissolve. Uses Historically, Hansen solubility parameters (HSP) have been used in industries such as paints and coatings, where understanding and controlling solvent–polymer interactions was vital. Over the years their use has been extended widely to applications such as: Environmental stress cracking of polymers Controlled dispersion of pigments, such as carbon black Understanding of solubility/dispersion properties of carbon nanotubes, Buckyballs, and quantum dots Adhesion to polymers Permeation of solvents and chemicals through plastics to understand issues such as glove safety, food packaging barrier properties and skin permeation Diffusion of solvents into polymers via understanding of surface concentration based on RED number Cytotoxicity via interaction with DNA Artificial noses (where response depends on polymer solubility of the test odor) Safer, cheaper, and faster solvent blends where an undesirable solvent can be rationally replaced by a mix of more desirable solvents whose combined HSP equals the HSP of the original solvent. Theoretical context HSP have been criticized for lacking the formal theoretical derivation of Hildebrand solubility parameters. All practical correlations of phase equilibrium involve certain assumptions that may or may not apply to a given system. In particular, all solubility parameter-based theories have a fundamental limitation in that they apply only to associated solutions (i.e., they can only predict positive deviations from Raoult's law): they cannot account for negative deviations from Raoult's law that result from effects such as solvation (often important in water-soluble polymers) or the formation of electron donor–acceptor complexes. Like any simple predictive theory, HSP are best used for screening, with data used to validate the predictions. Hansen parameters have been used to estimate Flory–Huggins chi parameters, often with reasonable accuracy. The factor of 4 in front of the dispersion term in the calculation of Ra has been the subject of debate. There is some theoretical basis for the factor of four (see Ch. 2 of Ref. 1). 
However, there are clearly systems (e.g. Bottino et al., "Solubility parameters of poly(vinylidene fluoride)" J. Polym. Sci. Part B: Polymer Physics 26(4), 785-79, 1988) where the regions of solubility are far more eccentric than predicted by the standard Hansen theory. HSP effects can be over-ridden by size effects (small molecules such as methanol can give "anomalous results"). It has been shown that it is possible to calculate HSP via molecular dynamics techniques, though currently the polar and hydrogen bonding parameters cannot reliably be partitioned in a manner that is compatible with Hansen's values. Limitations The following are limitations according to Hansen: The parameters will vary with temperature The parameters are an approximation. Bonding between molecules is more subtle than the three parameters suggest. Molecular shape is relevant, as are other types of bonding such as induced dipole, metallic and electrostatic interactions. The size of the molecules also plays a significant role in whether two molecules actually dissolve in a given period. The parameters are hard to measure. 2008 work by Abbott and Hansen has helped address some of the above issues. Temperature variations can be calculated, the role of molar volume ("kinetics versus thermodynamics") is clarified, new chromatographic ways to measure HSP are available, large datasets for chemicals and polymers are available, 'Sphere' software for determining HSP values of polymers, inks, quantum dots etc. is available (or easy to implement in one's own software) and the new Stefanis-Panayiotou method for estimating HSP from Unifac groups is available in the literature and also automated in software. All these new capabilities are described in the e-book, software, datasets described in the external links but can be implemented independently of any commercial package. Sometimes Hildebrand solubility parameters are used for similar purposes. Hildebrand parameters are not suitable for use outside their original area which was non-polar, non-hydrogen-bonding solvents. The Hildebrand parameter for such non-polar solvents is usually close to the Hansen value. A typical example showing why Hildebrand parameters can be unhelpful is that two solvents, butanol and nitroethane, which have the same Hildebrand parameter, are each incapable of dissolving typical epoxy polymers. Yet a 50:50 mix gives a good solvency for epoxies. This is easily explainable knowing the Hansen parameter of the two solvents and that the Hansen parameter for the 50:50 mix is close to the Hansen parameter of epoxies. See also Solvent (has a chart of Hansen solubility parameters for various solvents) Hildebrand solubility parameter MOSCED References External links Interactive web app for finding solvents with matching solubility parameters Link Physical chemistry Polymer chemistry 1967 in science
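As a worked example of the distance and RED calculation described above, the sketch below implements the standard Ra formula and the RED classification. The numeric values in the example call are illustrative placeholders, not tabulated Hansen parameters for any real solvent or polymer.

```python
# Illustrative calculation of the Hansen distance Ra and the relative energy
# difference RED = Ra / R0, as described above.  The parameter values used in
# the example are placeholders, not measured HSP data.
import math

def hansen_distance(solvent, solute):
    """Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2, parameters in MPa^0.5."""
    dd1, dp1, dh1 = solvent
    dd2, dp2, dh2 = solute
    return math.sqrt(4 * (dd1 - dd2) ** 2 + (dp1 - dp2) ** 2 + (dh1 - dh2) ** 2)

def red_number(solvent, solute, interaction_radius):
    """RED < 1: likely to dissolve; RED = 1: borderline; RED > 1: unlikely."""
    return hansen_distance(solvent, solute) / interaction_radius

# Hypothetical (dispersion, polar, hydrogen-bonding) parameters and radius.
solvent = (17.0, 8.0, 9.0)
polymer = (18.0, 10.0, 7.0)
r0 = 8.0                       # assumed interaction radius of the polymer

red = red_number(solvent, polymer, r0)
print(f"Ra = {hansen_distance(solvent, polymer):.2f} MPa^0.5, RED = {red:.2f}")
print("likely to dissolve" if red < 1 else "unlikely to dissolve")
```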
Hansen solubility parameter
Physics,Chemistry,Materials_science,Engineering
1,305
36,118,235
https://en.wikipedia.org/wiki/Kosmos%201261
Kosmos 1261 (Russian: Космос 1261, meaning Cosmos 1261) was a Soviet US-K missile early warning satellite which was launched in 1981 as part of the Soviet military's Oko programme. The satellite was designed to identify missile launches using optical telescopes and infrared sensors. Kosmos 1261 was launched from Site 41/1 at Plesetsk Cosmodrome in the Russian SSR. A Molniya-M carrier rocket with a 2BL upper stage was used to perform the launch, which took place at 09:40 UTC on 31 March 1981. The launch successfully placed the satellite into a molniya orbit. It subsequently received its Kosmos designation, and the international designator 1981-031A. The United States Space Command assigned it the Satellite Catalog Number 12376. Kosmos 1261 was a US-K satellite which, like Kosmos 862, self-destructed in orbit; NASA believes the destruction was deliberate. The spacecraft attempted to maneuver from its transfer orbit to an operational orbit 3 days after launch, but it appears that the maneuver was unsuccessful, and the spacecraft never became ground track-stabilized. Immediately after the maneuver some debris was detected, while additional debris was discovered in mid-May. There may have been more than one debris event. All of the resultant debris is still in orbit. See also 1981 in spaceflight List of Kosmos satellites (1251–1500) List of Oko satellites List of R-7 launches (1980-1984) References Kosmos satellites Oko Spacecraft launched in 1981 Spacecraft launched by Molniya-M rockets Spacecraft that broke apart in space
Kosmos 1261
Technology
327
25,944,272
https://en.wikipedia.org/wiki/Table%20of%20costs%20of%20operations%20in%20elliptic%20curves
Elliptic curve cryptography is a popular form of public key encryption that is based on the mathematical theory of elliptic curves. Points on an elliptic curve can be added and form a group under this addition operation. This article describes the computational costs for this group addition and certain related operations that are used in elliptic curve cryptography algorithms. Abbreviations for the operations The next section presents a table of all the time-costs of some of the possible operations in elliptic curves. The columns of the table are labelled by various computational operations. The rows of the table are for different models of elliptic curves. These are the operations considered: DBL – Doubling ADD – Addition mADD – Mixed addition: addition of an input that has been scaled to have Z-coordinate 1. mDBL – Mixed doubling: doubling of an input that has been scaled to have Z-coordinate 1. TPL – Tripling. DBL+ADD – Combined double-and-add step To see how adding (ADD) and doubling (DBL) points on elliptic curves are defined, see The group law. The importance of doubling to speed scalar multiplication is discussed after the table. For information about other possible operations on elliptic curves see http://hyperelliptic.org/EFD/g1p/index.html. Tabulation Under different assumptions on the multiplication, addition, inversion for the elements in some fixed field, the time-cost of these operations varies. In this table it is assumed that: I = 100M, S = 1M, ×param = 0M, add = 0M, ×const = 0M This means that 100 multiplications (M) are required to invert (I) an element; one multiplication is required to compute the square (S) of an element; no multiplication is needed to multiply an element by a parameter (×param), by a constant (×const), or to add two elements. For more information about other results obtained with different assumptions, see http://hyperelliptic.org/EFD/g1p/index.html Importance of doubling In some applications of elliptic curve cryptography and the elliptic curve method of factorization (ECM) it is necessary to consider the scalar multiplication nP, the sum of n copies of a point P. One way to do this is to compute successively P, 2P = P + P, 3P = 2P + P, ..., nP = (n − 1)P + P, which requires n − 1 additions. But it is faster to use the double-and-add method; for example, 100P = 2(2(P + 2(2(2(P + 2P))))), which needs only 6 doublings and 2 additions. In general, to compute nP, write n in binary as n = n0 + 2n1 + 4n2 + ... + 2^m nm with each ni equal to 0 or 1 and nm = 1; then nP is obtained by starting from P and, for each remaining bit from the most significant downward, doubling the running result and adding P whenever the corresponding bit is 1. This simple algorithm takes at most m steps, and each step consists of a doubling and (if ni = 1) adding two points. So, this is one of the reasons why addition and doubling formulas are defined. Furthermore, this method is applicable to any group; if the group law is written multiplicatively, the double-and-add algorithm is then called the square-and-multiply algorithm. References http://hyperelliptic.org/EFD/g1p/index.html Elliptic curve cryptography Finite fields Computational number theory Cryptographic attacks Elliptic curves
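The double-and-add method described above can be sketched in a few lines of Python. This is a minimal illustration only: the curve y² = x³ + 2x + 3 over F97, the base point and the scalar are toy values chosen for readability, and the affine formulas used here cost one field inversion per group operation, which is exactly why (under the I = 100M assumption above) real implementations prefer projective coordinates.

```python
# Minimal sketch: affine group law plus left-to-right double-and-add.
# Toy parameters, for illustration only.
p, a, b = 97, 2, 3           # curve y^2 = x^3 + a*x + b over F_p (non-singular here)
INF = None                   # point at infinity (the identity element)

def ec_add(P, Q):
    """Add two affine points (one field inversion per call)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope (DBL)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope (ADD)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(n, P):
    """Left-to-right double-and-add (n >= 1): one DBL per remaining bit, one ADD per set bit."""
    bits = bin(n)[2:]
    R = P                        # the leading bit of n is always 1
    for bit in bits[1:]:
        R = ec_add(R, R)         # DBL
        if bit == '1':
            R = ec_add(R, P)     # ADD
    return R

P = (0, 10)                      # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(scalar_mult(100, P))       # 100P, computed with 6 doublings and 2 further additions
```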
Table of costs of operations in elliptic curves
Mathematics,Technology
616
24,156,005
https://en.wikipedia.org/wiki/C17H14N2
The molecular formula C17H14N2 (molar mass: 246.31 g/mol, exact mass: 246.1157 u) may refer to: 3,3'-Diindolylmethane (DIM) Ellipticine Olivacine Molecular formulas
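As a quick arithmetic check of the masses quoted above, here is a minimal Python sketch using the usual atomic weights and monoisotopic isotope masses (rounded textbook constants).

```python
# Minimal sketch: recompute the molar mass and monoisotopic (exact) mass of C17H14N2.
composition   = {"C": 17, "H": 14, "N": 2}
atomic_weight = {"C": 12.011,  "H": 1.008,    "N": 14.007}     # g/mol, standard atomic weights
monoisotopic  = {"C": 12.000,  "H": 1.007825, "N": 14.003074}  # u, most abundant isotopes

molar_mass = sum(n * atomic_weight[el] for el, n in composition.items())
exact_mass = sum(n * monoisotopic[el] for el, n in composition.items())
print(f"molar mass ≈ {molar_mass:.2f} g/mol, exact mass ≈ {exact_mass:.4f} u")
# prints ≈ 246.31 g/mol and ≈ 246.1157 u, matching the values quoted above
```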
C17H14N2
Physics,Chemistry
72
830,651
https://en.wikipedia.org/wiki/Biotechnology%20and%20Biological%20Sciences%20Research%20Council
Biotechnology and Biological Sciences Research Council (BBSRC), part of UK Research and Innovation, is a non-departmental public body (NDPB), and is the largest UK public funder of non-medical bioscience. It predominantly funds scientific research institutes and university research departments in the UK. Purpose Receiving its funding through the science budget of the Department for Science, Innovation and Technology, BBSRC's mission is to "promote and support, by any means, high-quality basic, strategic and applied research and related postgraduate training relating to the understanding and exploitation of biological systems". Structure BBSRC's head office is at Polaris House in Swindon - the same building as the other councils of UK Research and Innovation, AHRC EPSRC, ESRC, Innovate UK, MRC, NERC, Research England and STFC, as well as the UKSA. Funded by Government, BBSRC invested over £498 million in bioscience in 2017–18. BBSRC also manages the joint Research Councils' Office in Brussels – the UK Research Office (UKRO). History BBSRC was created in 1994, merging the former Agricultural and Food Research Council (AFRC) and taking over the biological science activities of the former Science and Engineering Research Council (SERC). Chairs Sir Alistair Grant (1994–1998) Dr Peter Doyle (1998–2003) Dr Peter Ringrose (2003–2009) Prof Sir Tom Blundell (2009–2015) Prof Sir Gordon Duff (2015–present) Chief executives Prof (now Sir) Tom Blundell (1994–1996) Prof Ray Baker (1996–2002) Prof (now Dame) Julia Goodfellow (2002–2007) Prof Douglas Kell (2008–2013) Dr Jackie Hunter (from 21 October 2013) Prof Melanie Welham (2016–2018) Executive chairs Prof Melanie Welham (2018–2023) Prof Guy Poppy (2023–2024) Prof Anne Ferguson-Smith (2024–) (from 1 July 2024) Governance and management BBSRC is managed by the BBSRC Council consisting of a chair (Professor Martin Humphries), an executive chair (Professor Guy Poppy) and from ten to eighteen representatives from UK universities, government and industry. The council approves policies, strategy, budgets and major funding. A research panel provides expert advice which BBSRC Council draws upon in making decisions. The purpose of the research panel is to advise on: the development and implementation of the council's strategic plans the competitiveness, relevance, economic impact, and societal considerations of the science and innovation activities funded by BBSRC opportunities for partnership with national and international organisations Boards, panels and committees In addition to the council and the research panel, BBSRC has a series of other internal bodies for specific purposes. Appointments Board Remuneration Board Strategy Advisory Panels – eight panels advise and report to the BBSRC Executive Chair Research Committees – five committees award research grants in specific science areas Institutes The council strategically funds eight research institutes in the UK, and a number of centres. They have strong links with business, industry and the wider community, and support policy development. The institutes' research underpins key sectors of the UK economy such as agriculture, bioenergy, biotechnology, food and drink and pharmaceuticals. In addition, the institutes maintain unique research facilities of national importance. 
Babraham Institute (BI) (Cambridge) Earlham Institute (EI) (formerly The Genome Analysis Centre) (Norwich) The Institute of Biological, Environmental and Rural Sciences (IBERS), part of Aberystwyth University (Aberystwyth) John Innes Centre (JIC) (Norwich) The Pirbright Institute (Pirbright), formerly the Institute for Animal Health (IAH) Quadram Institute (Norwich), formerly the Institute of Food Research The Roslin Institute (RI) (Midlothian), part of the University of Edinburgh Rothamsted Research (Harpenden and North Wyke) Other research institutes have merged with each other or with local universities. Previous BBSRC (or AFRC) sponsored institutes include: Institute of Grassland and Environmental Research (IGER – Aberystwyth), merged with the University of Aberystwyth 2008 Letcombe Laboratory Long Ashton Research Station (LARS – Bristol) the Plant Breeding Institute (PBI – Cambridge) the Weed Research Organisation (WRO – Oxford) Silsoe Research Institute (SRI – Bedfordshire) was closed in 2006. References Biological research institutes in the United Kingdom Biology education in the United Kingdom Biotechnology in the United Kingdom Biotechnology organizations Government agencies established in 1994 Life sciences industry Organisations based in Swindon Organizations established in 1994 Research institutes in the United Kingdom Science and technology in the United Kingdom UK Research and Innovation 1994 establishments in the United Kingdom
Biotechnology and Biological Sciences Research Council
Engineering,Biology
985
31,671,324
https://en.wikipedia.org/wiki/Mennicke%20symbol
In mathematics, a Mennicke symbol is a map from pairs of elements of a number field to an abelian group satisfying some identities found by Jens Mennicke. They were named by Bass, Milnor and Serre, who used them in their solution of the congruence subgroup problem. Definition Suppose that A is a Dedekind domain and q is a non-zero ideal of A. The set Wq is defined to be the set of pairs (a, b) with a = 1 mod q, b = 0 mod q, such that a and b generate the unit ideal. A Mennicke symbol on Wq with values in a group C is a function (a, b) → [b/a] from Wq to C such that [b/1] = 1, [b1b2/a] = [b1/a][b2/a], [b/a] = [(b + ta)/a] if t is in q, and [b/a] = [b/(a + tb)] if t is in A. There is a universal Mennicke symbol with values in a group Cq such that any Mennicke symbol with values in C can be obtained by composing the universal Mennicke symbol with a unique homomorphism from Cq to C. References Group theory Algebraic K-theory
Mennicke symbol
Mathematics
239
6,099
https://en.wikipedia.org/wiki/Carboxylic%20acid
In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (−COOH) attached to an R-group. The general formula of a carboxylic acid is often written as R−COOH or R−CO2H, sometimes as R−C(O)OH, with R referring to an organyl group (e.g., alkyl, alkenyl, aryl), or hydrogen, or other groups. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion. Examples and nomenclature Carboxylic acids are commonly identified by their trivial names. They often have the suffix -ic acid. IUPAC-recommended names also exist; in this system, carboxylic acids have an -oic acid suffix. For example, butyric acid (CH3CH2CH2COOH) is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran. The carboxylate anion (R−COO− or R−CO2−) of a carboxylic acid is usually named with the suffix -ate, in keeping with the general pattern of -ic acid and -ate for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate. Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, despite having a moiety that looks like a COOH group. Physical properties Solubility Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl C=O) and hydrogen-bond donors (the hydroxyl −OH), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas bigger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water. Boiling points Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilized dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporized, increasing the enthalpy of vaporization requirements significantly. Acidity Carboxylic acids are Brønsted–Lowry acids because they are proton (H+) donors. They are the most common type of organic acid. Carboxylic acids are typically weak acids, meaning that they only partially dissociate into cations and anions in neutral aqueous solution. For example, at room temperature, in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (about 4 × 10−3 mol out of 1 mol). Electron-withdrawing substituents, such as the -CF3 group, give stronger acids (the pKa of acetic acid is 4.76 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a pKa of 0.23).
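The degree of dissociation quoted above can be checked with a short calculation from the acid dissociation constant; the sketch below uses the textbook Ka of acetic acid (about 1.8 × 10−5) and is only a back-of-the-envelope illustration.

```python
# Minimal sketch: fraction of a weak acid dissociated at a given concentration,
# from the quadratic form of Ka = x^2 / (C - x) with x = [H+] = [A-].
import math

def fraction_dissociated(Ka, C):
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * C)) / 2   # positive root of x^2 + Ka*x - Ka*C = 0
    return x / C

Ka_acetic = 1.8e-5
for C in (1.0, 0.1, 0.01):
    print(f"{C} M acetic acid: {100 * fraction_dissociated(Ka_acetic, C):.2f}% dissociated")
# at 1 M the result is roughly 0.4%, the figure quoted above; dilution increases the fraction
```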
Electron-donating substituents give weaker acids (the pKa of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a pKa of 4.76) Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has a partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the -1/2 negative charges on the 2 oxygen atoms. Odour Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume. Characterization Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond (νC=O) between 1680 and 1725 cm−1. A characteristic νO–H band appears as a broad peak in the 2500 to 3000 cm−1 region. By 1H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water. Occurrence and applications Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids and polyamides of aminocarboxylic acids are the main components of proteins. Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps. Synthesis Industrial routes In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment. Carbonylation of alcohols as illustrated by the Cativa process for the production of acetic acid. Formic acid is prepared by a different carbonylation pathway, also starting from methanol. Oxidation of aldehydes with air using cobalt and manganese catalysts. The required aldehydes are readily obtained from alkenes by hydroformylation. Oxidation of hydrocarbons using air. For simple alkanes, this method is inexpensive but not selective enough to be useful. Allylic and benzylic compounds undergo more selective oxidations. Alkyl groups on a benzene ring are oxidized to the carboxylic acid, regardless of its chain length. Benzoic acid from toluene, terephthalic acid from para-xylene, and phthalic acid from ortho-xylene are illustrative large-scale conversions. Acrylic acid is generated from propene. Oxidation of ethene using silicotungstic acid catalyst. Base-catalyzed dehydrogenation of alcohols. Carbonylation coupled to the addition of water. This method is effective and versatile for alkenes that generate secondary and tertiary carbocations, e.g. isobutylene to pivalic acid. In the Koch reaction, the addition of water and carbon monoxide to alkenes or alkynes is catalyzed by strong acids. Hydrocarboxylations involve the simultaneous addition of water and CO. Such reactions are sometimes called "Reppe chemistry." 
Hydrolysis of triglycerides obtained from plant or animal oils. These methods of synthesizing some long-chain carboxylic acids are related to soap making. Fermentation of ethanol. This method is used in the production of vinegar. The Kolbe–Schmitt reaction provides a route to salicylic acid, precursor to aspirin. Laboratory methods Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents. Oxidation of primary alcohols or aldehydes with strong oxidants such as potassium dichromate, Jones reagent, potassium permanganate, or sodium chlorite. The method is more suitable for laboratory conditions than the industrial use of air, which is "greener" because it yields less inorganic side products such as chromium or manganese oxides. Oxidative cleavage of olefins by ozonolysis, potassium permanganate, or potassium dichromate. Hydrolysis of nitriles, esters, or amides, usually with acid- or base-catalysis. Carbonation of a Grignard reagent and organolithium reagents: Halogenation followed by hydrolysis of methyl ketones in the haloform reaction Base-catalyzed cleavage of non-enolizable ketones, especially aryl ketones: Less-common reactions Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest. Disproportionation of an aldehyde in the Cannizzaro reaction Rearrangement of diketones in the benzilic acid rearrangement Involving the generation of benzoic acids are the von Richter reaction from nitrobenzenes and the Kolbe–Schmitt reaction from phenols. Reactions Acid-base reactions Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water: Conversion to esters, amides, anhydrides Widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols. Their conversion to esters is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead esters are typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP. Converting a carboxylic acid to an amide is possible, but not straightforward. Instead of acting as a nucleophile, an amine will react as a base in the presence of a carboxylic acid to give the ammonium carboxylate salt. Heating the salt to above 100 °C will drive off water and lead to the formation of the amide. This method of synthesizing amides is industrially important, and has laboratory applications as well. In the presence of a strong acid catalyst, carboxylic acids can condense to form acid anhydrides. The condensation produces water, however, which can hydrolyze the anhydride back to the starting carboxylic acids. Thus, the formation of the anhydride via condensation is an equilibrium process. Under acid-catalyzed conditions, carboxylic acids will react with alcohols to form esters via the Fischer esterification reaction, which is also an equilibrium process. Alternatively, diazomethane can be used to convert an acid to an ester. While esterification reactions with diazomethane often give quantitative yields, diazomethane is only useful for forming methyl esters. 
Reduction Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group. The Vilsmaier reagent (N,N-Dimethyl(chloromethylene)ammonium chloride; ) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris(t-butoxy)aluminum hydride to afford an aldehyde in a one pot procedure. This procedure is known to tolerate reactive carbonyl functionalities such as ketone as well as moderately reactive ester, olefin, nitrile, and halide moieties. Conversion to acyl halides The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters. Thionyl chloride can be used to convert carboxylic acids to their corresponding acyl chlorides. First, carboxylic acid 1 attacks thionyl chloride, and chloride ion leaves. The resulting oxonium ion 2 is activated towards nucleophilic attack and has a good leaving group, setting it apart from a normal carboxylic acid. In the next step, 2 is attacked by chloride ion to give tetrahedral intermediate 3, a chlorosulfite. The tetrahedral intermediate collapses with the loss of sulfur dioxide and chloride ion, giving protonated acyl chloride 4. Chloride ion can remove the proton on the carbonyl group, giving the acyl chloride 5 with a loss of HCl. Phosphorus(III) chloride (PCl3) and phosphorus(V) chloride (PCl5) will also convert carboxylic acids to acid chlorides, by a similar mechanism. One equivalent of PCl3 can react with three equivalents of acid, producing one equivalent of H3PO3, or phosphorus acid, in addition to the desired acid chloride. PCl5 reacts with carboxylic acids in a 1:1 ratio, and produces phosphorus(V) oxychloride (POCl3) and hydrogen chloride (HCl) as byproducts. Reactions with carbanion equivalents Carboxylic acids react with Grignard reagents and organolithiums to form ketones. The first equivalent of nucleophile acts as a base and deprotonates the acid. A second equivalent will attack the carbonyl group to create a geminal alkoxide dianion, which is protonated upon workup to give the hydrate of a ketone. Because most ketone hydrates are unstable relative to their corresponding ketones, the equilibrium between the two is shifted heavily in favor of the ketone. For example, the equilibrium constant for the formation of acetone hydrate from acetone is only 0.002. The carboxylic group is the most acidic in organic compounds. Specialized reactions As with all carbonyl compounds, the protons on the α-carbon are labile due to keto–enol tautomerization. Thus, the α-carbon is easily halogenated in the Hell–Volhard–Zelinsky halogenation. The Schmidt reaction converts carboxylic acids to amines. Carboxylic acids are decarboxylated in the Hunsdiecker reaction. The Dakin–West reaction converts an amino acid to the corresponding amino ketone. In the Barbier–Wieland degradation, a carboxylic acid on an aliphatic chain having a simple methylene bridge at the alpha position can have the chain shortened by one carbon. 
The inverse procedure is the Arndt–Eistert synthesis, where an acid is converted into acyl halide, which is then reacted with diazomethane to give one additional methylene in the aliphatic chain. Many acids undergo oxidative decarboxylation. Enzymes that catalyze these reactions are known as carboxylases (EC 6.4.1) and decarboxylases (EC 4.1.1). Carboxylic acids are reduced to aldehydes via the ester and DIBAL, via the acid chloride in the Rosenmund reduction and via the thioester in the Fukuyama reduction. In ketonic decarboxylation carboxylic acids are converted to ketones. Organolithium reagents (>2 equiv) react with carboxylic acids to give a dilithium 1,1-diolate, a stable tetrahedral intermediate which decomposes to give a ketone upon acidic workup. The Kolbe electrolysis is an electrolytic, decarboxylative dimerization reaction. It gets rid of the carboxyl groups of two acid molecules, and joins the remaining fragments together. Carboxyl radical The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The carboxyl group tends to dimerise to form oxalic acid. See also Acid anhydride Acid chloride Amide Amino acid Ester List of carboxylic acids Dicarboxylic acid Pseudoacid Thiocarboxy Carbon dioxide (CO2) References External links Carboxylic acids pH and titration – freeware for calculations, data analysis, simulation, and distribution diagram generation PHC. Functional groups
Carboxylic acid
Chemistry
3,778
44,609,091
https://en.wikipedia.org/wiki/Tylopilus%20funerarius
Tylopilus funerarius is a bolete fungus in the family Boletaceae. Found in Singapore, it was described as new to science in 1909 by English mycologist George Edward Massee. He described it as a "sombre, uninviting species, characterised by brownish-black velvety pileus and brown tube and pores", and considered it similar in appearance to Boletus chrysenteron (now Xerocomellus chrysenteron). The species was transferred to the genus Tylopilus in 1981. References External links funerarius Fungi described in 1909 Fungi of Asia Fungus species
Tylopilus funerarius
Biology
132
42,628,154
https://en.wikipedia.org/wiki/Master%20stability%20function
In mathematics, the master stability function is a tool used to analyze the stability of the synchronous state in a dynamical system consisting of many identical systems which are coupled together, such as the Kuramoto model. The setting is as follows. Consider a system with N identical oscillators. Without the coupling, they evolve according to the same differential equation, say dxi/dt = f(xi), where xi denotes the state of oscillator i. A synchronous state of the system of oscillators is where all the oscillators are in the same state. The coupling is defined by a coupling strength σ, a matrix Aij which describes how the oscillators are coupled together, and a function g of the state of a single oscillator. Including the coupling leads to the following equation: dxi/dt = f(xi) + σ Σj Aij g(xj). It is assumed that the row sums Σj Aij vanish so that the manifold of synchronous states is neutrally stable. The master stability function is now defined as the function which maps the complex number γ to the greatest Lyapunov exponent of the equation dy/dt = (Df(s) + γ Dg(s)) y, where s denotes the synchronous trajectory and Df, Dg are the Jacobians of f and g evaluated along it. The synchronous state of the system of coupled oscillators is stable if the master stability function is negative at γ = σλk, where λk ranges over the eigenvalues of the coupling matrix A. References Dynamical systems
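In the simplest special case, where the synchronous state is a fixed point and the coupling function is the identity (so Dg is the identity matrix), the master stability function reduces to the largest real part of the eigenvalues of Df + γI, and the stability test above becomes a finite eigenvalue check. The sketch below illustrates that special case only; every numerical value in it is an invented example, not data from the article.

```python
# Minimal sketch of the stability test in the fixed-point, identity-coupling case.
import numpy as np

Jf = np.array([[-0.1,  1.0],
               [-1.0, -0.1]])      # assumed node Jacobian at the synchronous fixed point

# Coupling matrix with zero row sums (diffusive coupling on a ring of 4 nodes)
A = np.array([[-2, 1, 0, 1],
              [ 1,-2, 1, 0],
              [ 0, 1,-2, 1],
              [ 1, 0, 1,-2]], dtype=float)
sigma = 0.5                        # assumed coupling strength

def msf(gamma):
    """Master stability function for Dg = I and a fixed-point synchronous state."""
    return np.linalg.eigvals(Jf + gamma * np.eye(2)).real.max()

lams = np.linalg.eigvals(A).real
transverse = [l for l in sorted(lams) if abs(l) > 1e-9]   # drop the eigenvalue 0 along the sync manifold
print([round(msf(sigma * l), 3) for l in transverse])
print("synchronous state stable:", all(msf(sigma * l) < 0 for l in transverse))
```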
Master stability function
Physics,Mathematics
255
1,352,555
https://en.wikipedia.org/wiki/Wheel%20and%20axle
The wheel and axle is a simple machine, consisting of a wheel attached to a smaller axle so that these two parts rotate together, in which a force is transferred from one to the other. The wheel and axle can be viewed as a version of the Lever, with a drive force applied tangentially to the perimeter of the wheel, and a load force applied to the axle supported in a bearing, which serves as a fulcrum. History The Halaf culture of 6500–5100 BCE has been credited with the earliest depiction of a wheeled vehicle, but this is doubtful as there is no evidence of Halafians using either wheeled vehicles or even pottery wheels. One of the first applications of the wheel to appear was the potter's wheel, used by prehistoric cultures to fabricate clay pots. The earliest type, known as "tournettes" or "slow wheels", were known in the Middle East by the 5th millennium BCE. One of the earliest examples was discovered at Tepe Pardis, Iran, and dated to 5200–4700 BCE. These were made of stone or clay and secured to the ground with a peg in the center, but required significant effort to turn. True potter's wheels, which are freely-spinning and have a wheel and axle mechanism, were developed in Mesopotamia (Iraq) by 4200–4000 BCE. The oldest surviving example, which was found in Ur (modern day Iraq), dates to approximately 3100 BCE. Evidence of wheeled vehicles appeared by the late 4th millennium BCE. Depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization of Mesopotamia, are dated between 3700–3500 BCE. In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern Caucasus (Maykop culture) and Eastern Europe (Cucuteni–Trypillian culture). Depictions of a wheeled vehicle appeared between 3500 and 3350 BCE in the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn was 40 m long and had 3 doors. Surviving evidence of a wheel–axle combination, from Stare Gmajne near Ljubljana in Slovenia (Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known; a circumalpine type of wagon construction (the wheel and axle rotate together, as in Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (axle does not rotate). They both are dated to c. 3200–3000 BCE. Historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE. An early example of a wooden wheel and its axle was found in 2002 at the Ljubljana Marshes some 20 km south of Ljubljana, the capital of Slovenia. According to radiocarbon dating, it is between 5,100 and 5,350 years old. The wheel was made of ash and oak and had a radius of 70 cm and the axle was 120 cm long and made of oak. In China, the earliest evidence of spoked wheels comes from Qinghai in the form of two wheel hubs from a site dated between 2000 and 1500 BCE. In Roman Egypt, Hero of Alexandria identified the wheel and axle as one of the simple machines used to lift weights. This is thought to have been in the form of the windlass which consists of a crank or pulley connected to a cylindrical barrel that provides mechanical advantage to wind up a rope and lift a load such as a bucket from the well. 
The wheel and axle was identified as one of six simple machines by Renaissance scientists, drawing from Greek texts on technology. Mechanical advantage The simple machine called a wheel and axle refers to the assembly formed by two disks, or cylinders, of different diameters mounted so they rotate together around the same axis. The thin rod which needs to be turned is called the axle and the wider object fixed to the axle, on which we apply force, is called the wheel. A tangential force applied to the periphery of the large disk can exert a larger force on a load attached to the axle, achieving mechanical advantage. When used as the wheel of a wheeled vehicle the smaller cylinder is the axle of the wheel, but when used in a windlass, winch, and other similar applications (such as a medieval mining lift) the smaller cylinder may be separate from the axle mounted in the bearings. It cannot be used separately. Assuming the wheel and axle does not dissipate or store energy, that is it has no friction or elasticity, the power input by the force applied to the wheel must equal the power output at the axle. As the wheel and axle system rotates around its bearings, points on the circumference, or edge, of the wheel move faster than points on the circumference, or edge, of the axle. Therefore, a force applied to the edge of the wheel must be less than the force applied to the edge of the axle, because power is the product of force and velocity. Let a and b be the distances from the center of the bearing to the edges of the wheel A and the axle B. If the input force FA is applied to the edge of the wheel A and the force FB at the edge of the axle B is the output, then the ratio of the velocities of points A and B is given by a/b, so the ratio of the output force to the input force, or mechanical advantage, is given by MA = FB/FA = a/b. The mechanical advantage of a simple machine like the wheel and axle is computed as the ratio of the resistance to the effort. The larger the ratio the greater the multiplication of force (torque) created or distance achieved. By varying the radii of the axle and/or wheel, any amount of mechanical advantage may be gained. In this manner, the size of the wheel may be increased to an inconvenient extent. In this case a system or combination of wheels (often toothed, that is, gears) are used. As a wheel and axle is a type of lever, a system of wheels and axles is like a compound lever. On a powered wheeled vehicle the transmission exerts a force on the axle which has a smaller radius than the wheel. The mechanical advantage is therefore much less than 1. The wheel and axle of a car are therefore not representative of a simple machine (whose purpose is to increase the force). The friction between wheel and road is actually quite low, so even a small force exerted on the axle is sufficient. The actual advantage lies in the large rotational speed at which the axle is rotating thanks to the transmission. Ideal mechanical advantage The mechanical advantage of a wheel and axle with no friction is called the ideal mechanical advantage (IMA). It is calculated with the following formula: IMA = a/b, the radius of the wheel divided by the radius of the axle. Actual mechanical advantage All actual wheels have friction, which dissipates some of the power as heat. The actual mechanical advantage (AMA) of a wheel and axle is calculated with the following formula: AMA = η (a/b), where η is the efficiency of the wheel, the ratio of power output to power input References Additional resources Basic Machines and How They Work, United States. 
Bureau of Naval Personnel, Courier Dover Publications 1965, pp. 3–1 and following preview online Historic machinery Simple machines Vehicle technology
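A quick numerical illustration of the mechanical advantage relations discussed above; the radii, efficiency and load below are made-up example values, not data from any source.

```python
# Minimal sketch: ideal and actual mechanical advantage of a wheel and axle.
wheel_radius = 0.40   # a, metres (the wheel or crank where the effort is applied)
axle_radius  = 0.05   # b, metres (the axle or drum that carries the load)
efficiency   = 0.85   # assumed fraction of input power delivered to the load

ideal_MA  = wheel_radius / axle_radius     # IMA = a / b
actual_MA = efficiency * ideal_MA          # AMA, reduced by frictional losses

load = 400.0                               # newtons to be lifted at the axle
print(f"IMA = {ideal_MA:.1f}, AMA = {actual_MA:.1f}")
print(f"required effort at the wheel ≈ {load / actual_MA:.1f} N")
```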
Wheel and axle
Physics,Technology,Engineering
1,534
15,310,422
https://en.wikipedia.org/wiki/Christian%20Haass
Christian Haass (born 19 December 1960 in Mannheim, Germany) is a German biochemist who specializes in metabolic biochemistry and neuroscience. Haass studied biology in Heidelberg from 1981 to 1985. From 1990 on he was a postdoctoral fellow in the laboratory of Dennis Selkoe at Harvard Medical School, where he worked from 1993 to 1995 as an assistant professor. Afterwards he returned to Germany as professor of molecular biology at the Central Institute of Mental Health, Mannheim. In 1999 he was offered a chair in the medical faculty at the Ludwig Maximilian University of Munich. The emphasis of his work is in the molecular biology and cell biology of Alzheimer's disease and Parkinson's disease. He received Hamdan Award for Medical Research Excellence - Cytokines in Pathogenesis & Therapy of Diseases from Hamdan Medical Award in 2006. Among other awards, he has won the Leibniz Prize and the Metlife Foundation Award for Medical Research in Alzheimer's Disease. References Sources Adolf Butenandt Institute - Ludwig Maximilian University of Munich Laboratory for Neurodegenerative Disease Research Collaborative Research Center 596 - Molecular Mechanisms of Neurodegeneration Publications German biochemists 1960 births Living people Heidelberg University alumni Harvard Medical School people Academic staff of the Ludwig Maximilian University of Munich Gottfried Wilhelm Leibniz Prize winners Members of Academia Europaea Members of the German National Academy of Sciences Leopoldina
Christian Haass
Chemistry
273
38,900,401
https://en.wikipedia.org/wiki/Tricholoma%20odorum
Tricholoma odorum is a mushroom of the agaric genus Tricholoma. It was formally described in 1898 by American mycologist Charles Horton Peck. It is considered inedible. See also List of North American Tricholoma References External links Fungi described in 1898 Fungi of North America odorum Taxa named by Charles Horton Peck Fungus species
Tricholoma odorum
Biology
75
63,732,867
https://en.wikipedia.org/wiki/Andrea%20Cavalleri
Andrea Cavalleri (born 1969) is an Italian physicist who specializes in optical science and in condensed matter physics. He is the founding director of the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg, Germany and a professor of Physics at the University of Oxford. He was awarded the 2018 Frank Isakson Prize for his pioneering work on ultrafast optical spectroscopy applied to condensed matter systems. Scientific achievements Cavalleri is known for his application of light to create new states of matter, and especially for the use of terahertz and mid-infrared optical pulses to sculpt new crystal structures. The field of research pioneered by Cavalleri is sometimes referred to as non-linear phononics. He has shown that phononic control can be used to create new crystal structures with light, to induce hidden metallic states in oxides, induce ferroelectricity in dielectrics, manipulate magnetism and create non-equilibrium superconductivity at very high temperatures. Cavalleri has also been amongst the people who applied the first femtosecond X-ray pulses to condensed matter systems, for example in his studies of photo-induced phase transitions. Scientific career He received his laurea degree and PhD at the University of Pavia, as a student of Almo Collegio Borromeo, in 1994 and 1998 respectively. Cavalleri has held research positions at the University of Essen, the University of California, San Diego and the Lawrence Berkeley National Laboratory. In 2005, he joined the faculty of the University of Oxford, where he was promoted to Professor of Physics in 2006. He joined the Max Planck Society in 2008. Selected honors and awards He is a recipient of the 2004 European Science Foundation Young Investigator Award, of the 2015 Max Born Medal from the Institute of Physics (UK) and the Deutsche Physikalische Gesellschaft and of the 2015 Dannie Heineman Prize from the Göttingen Academy of Sciences. The American Physical Society awarded Cavalleri the 2018 Frank Isakson Prize for Optical Effects in Solids "for pioneering contributions to the development and application of ultra-fast optical spectroscopy to condensed matter systems, and providing insight into lattice dynamics, structural phase transitions, and the non-equilibrium control of solids". He is an elected fellow of the American Physical Society, the American Association for the Advancement of Science and the Institute of Physics (UK). He was elected a member of the Academia Europaea in 2017 and a fellow of the European Academy of Sciences in 2018. References External links Research homepage 21st-century Italian physicists 1969 births Living people Academics of the University of Oxford University of Pavia alumni Italian expatriates in the United States Italian expatriates in the United Kingdom Italian expatriates in Germany Condensed matter physicists Academic staff of the University of Hamburg Max Planck Society people Max Planck Institute directors Fellows of the American Physical Society Members of Academia Europaea
Andrea Cavalleri
Physics,Materials_science
587
70,769,267
https://en.wikipedia.org/wiki/Coprinopsis%20pseudoradiata
Coprinopsis pseudoradiata is a species of coprophilous fungus in the family Psathyrellaceae. It grows on the dung of sheep. See also List of Coprinopsis species References Fungi described in 2001 Fungi of Europe pseudoradiata Fungus species
Coprinopsis pseudoradiata
Biology
56
23,579
https://en.wikipedia.org/wiki/Photoelectric%20effect
The photoelectric effect is the emission of electrons from a material caused by electromagnetic radiation such as ultraviolet light. Electrons emitted in this manner are called photoelectrons. The phenomenon is studied in condensed matter physics, solid state, and quantum chemistry to draw inferences about the properties of atoms, molecules and solids. The effect has found use in electronic devices specialized for light detection and precisely timed electron emission. The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted when they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in a delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency—regardless of the light's intensity or duration of exposure. Because a low-frequency beam at a high intensity does not build up the energy required to produce photoelectrons, as would be the case if light's energy accumulated over time from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space, but a swarm of discrete energy packets, known as photons—term coined by Gilbert N. Lewis in 1926. Emission of conduction electrons from typical metals requires a few electron-volt (eV) light quanta, corresponding to short-wavelength visible or ultraviolet light. In extreme cases, emissions are induced with photons approaching zero energy, like in systems with negative electron affinity and the emission from excited states, or a few hundred keV photons for core electrons in elements with a high atomic number. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect, the photovoltaic effect, and the photoelectrochemical effect. Emission mechanism The photons of a light beam have a characteristic energy, called photon energy, which is proportional to the frequency of the light. In the photoemission process, when an electron within some material absorbs the energy of a photon and acquires more energy than its binding energy, it is likely to be ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase in the intensity of low-frequency light will only increase the number of low-energy photons, this change in intensity will not create any single photon with enough energy to dislodge an electron. Moreover, the energy of the emitted electrons will not depend on the intensity of the incoming light of a given frequency, but only on the energy of the individual photons. While free electrons can absorb any energy when irradiated as long as this is followed by an immediate re-emission, like in the Compton effect, in quantum systems all of the energy from one photon is absorbed—if the process is allowed by quantum mechanics—or none at all. Part of the acquired energy is used to liberate the electron from its atomic binding, and the rest contributes to the electron's kinetic energy as a free particle. 
Because electrons in a material occupy many different quantum states with different binding energies, and because they can sustain energy losses on their way out of the material, the emitted electrons will have a range of kinetic energies. The electrons from the highest occupied states will have the highest kinetic energy. In metals, those electrons will be emitted from the Fermi level. When the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used, and emission into a vacuum is distinguished as external photoemission. Experimental observation of photoelectric emission Even though photoemission can occur from any material, it is most readily observed from metals and other conductors. This is because the process produces a charge imbalance which, if not neutralized by current flow, results in the increasing potential barrier until the emission completely ceases. The energy barrier to photoemission is usually increased by nonconductive oxide layers on metal surfaces, so most practical experiments and devices based on the photoelectric effect use clean metal surfaces in evacuated tubes. Vacuum also helps observing the electrons since it prevents gases from impeding their flow between the electrodes. Sunlight is an inconsistent and variable source of ultraviolet light. Cloud cover, ozone concentration, altitude, and surface reflection all alter the amount of UV. Laboratory sources of UV are based on xenon arc lamps or, for more uniform but weaker light, fluorescent lamps. More specialized sources include ultraviolet lasers and synchrotron radiation. The classical setup to observe the photoelectric effect includes a light source, a set of filters to monochromatize the light, a vacuum tube transparent to ultraviolet light, an emitting electrode (E) exposed to the light, and a collector (C) whose voltage VC can be externally controlled. A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This current can only increase with the increase of the intensity of light. An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached the value that is high enough to slow down and stop the most energetic photoelectrons of kinetic energy Kmax. This value of the retarding voltage is called the stopping potential or cut off potential Vo. Since the work done by the retarding potential in stopping the electron of charge e is eVo, the following must hold eVo = Kmax. The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties. For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy. 
An increase in the intensity of the same monochromatic light (so long as the intensity is not too high), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I—but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light. The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10−9 second. Angular distribution of the photoelectrons is highly dependent on polarization (the direction of the electric field) of the incident light, as well as the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy. Theoretical explanation In 1905, Einstein proposed a theory of the photoelectric effect using the concept that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy that is proportional to the frequency of the corresponding electromagnetic wave. The proportionality constant h has become known as the Planck constant. In the range of kinetic energies of the electrons that are removed from their varying atomic bindings by the absorption of a photon of energy hf, the highest kinetic energy is Kmax = hf − W. Here, W is the minimum energy required to remove an electron from the surface of the material. It is called the work function of the surface and is sometimes denoted W or φ. If the work function is written as W = hf0, the formula for the maximum kinetic energy of the ejected electrons becomes Kmax = h(f − f0). Kinetic energy is positive, and f > f0 is required for the photoelectric effect to occur. The frequency f0 is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons as well as the stopping voltage in the experiment rise linearly with the frequency, and have no dependence on the number of photons and the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect, and had far-reaching consequences in the development of quantum mechanics. Photoemission from atoms, molecules and solids Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta deliver more than this amount of energy to an individual electron, the electron may be emitted into free space with excess (kinetic) energy that is higher than the electron's binding energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from the state at binding energy EB is found at kinetic energy Ekinetic = hf − EB. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics. 
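A minimal numeric illustration of the relations above, using an assumed 400 nm light source and an assumed cathode work function of 2.3 eV (of the order found for alkali metals); the numbers are examples, not measured data.

```python
# Minimal sketch: Einstein's relation K_max = h*f - W and the stopping potential V0 = K_max / e.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C

wavelength = 400e-9          # assumed: 400 nm (violet) light
work_function_eV = 2.3       # assumed cathode work function, eV

photon_energy_eV = h * c / (wavelength * e)
K_max_eV = photon_energy_eV - work_function_eV

if K_max_eV > 0:
    print(f"photon energy = {photon_energy_eV:.2f} eV")
    print(f"K_max = {K_max_eV:.2f} eV, stopping potential = {K_max_eV:.2f} V")
else:
    print("below the threshold frequency: no photoemission")
```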
Models of photoemission from solids The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model for ultraviolet and soft X-ray excitation decomposes the effect into these steps: Inner photoelectric effect in the bulk of the material that is a direct optical transition between an occupied and an unoccupied electronic state. This effect is subject to quantum-mechanical selection rules for dipole transitions. The hole left behind the electron can give rise to secondary electron emission, or the so-called Auger effect, which may be visible even when the primary photoelectron does not leave the material. In molecular solids phonons are excited in this step and may be visible as satellite lines in the final electron energy. Electron propagation to the surface in which some electrons may be scattered because of interactions with other constituents of the solid. Electrons that originate deeper in the solid are much more likely to suffer collisions and emerge with altered energy and momentum. Their mean-free path is a universal curve dependent on electron's energy. Electron escape through the surface barrier into free-electron-like states of the vacuum. In this step the electron loses energy in the amount of the work function of the surface, and suffers from the momentum loss in the direction perpendicular to the surface. Because the binding energy of electrons in solids is conveniently expressed with respect to the highest occupied state at the Fermi energy , and the difference to the free-space (vacuum) energy is the work function of the surface, the kinetic energy of the electrons emitted from solids is usually written as . There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside. History 19th century In 1839, Alexandre Edmond Becquerel discovered the related photovoltaic effect while studying the effect of light on electrolytic cells. Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the metal for its high resistance properties in conjunction with his work involving submarine telegraph cables. Johann Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg, investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. They arranged metals with respect to their power of discharging negative electricity: rubidium, potassium, alloy of potassium and sodium, sodium, lithium, magnesium, thallium and zinc; for copper, platinum, lead, iron, cadmium, carbon, and mercury the effects with ordinary light were too small to be measurable. 
The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect. In 1887, Heinrich Hertz observed the photoelectric effect and reported on the production and reception of electromagnetic waves. The receiver in his apparatus consisted of a coil with a spark gap, where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation. The discoveries by Hertz led to a series of investigations by Wilhelm Hallwachs, Hoor, Augusto Righi and Aleksander Stoletov on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope. He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light. With regard to the Hertz effect, the researchers from the start showed the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces. According to Hallwachs, ozone played an important part in the phenomenon, and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue is absent in a vacuum. In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov with results reported in six publications. Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of photoeffect or Stoletov's law). He measured the dependence of the intensity of the photo electric current on the gas pressure, where he found the existence of an optimal gas pressure corresponding to a maximum photocurrent; this property was used for the creation of solar cells. Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt and O. Knoblauch compiled a list of these substances. In 1897, J. J. Thomson investigated ultraviolet light in Crookes tubes. Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays. These particles later became known as the electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted, and current to be detected. The amount of this current varied with the intensity and color of the radiation. 
Larger radiation intensity or frequency would produce more current. During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as the ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas. 20th century In 1902, Lenard observed that the energy of individual emitted electrons was independent of the applied light intensity. This appeared to be at odds with Maxwell's wave theory of light, which predicted that the electron energy would be proportional to the intensity of the radiation. Lenard observed the variation in electron energy with light frequency using a powerful electric arc lamp which enabled him to investigate large changes in intensity. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface. Lenard's initial investigations of the photoelectric effect in gases were followed up by J. J. Thomson and then more decisively by Frederic Palmer Jr. Gas photoemission was studied further and showed very different characteristics from those at first attributed to it by Lenard. In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his "On the Law of Distribution of Energy in the Normal Spectrum" paper that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect".
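A Millikan-style photoelectric measurement amounts in essence to fitting a straight line to stopping potential versus light frequency, with slope h/e. The sketch below reproduces that kind of analysis on synthetic data generated from the modern value of the Planck constant; the frequencies, work function and noise level are assumptions for illustration, not Millikan's actual measurements.

# Minimal sketch of a Millikan-style analysis: fit stopping potential vs frequency,
# slope = h/e. The "measurements" are synthetic and purely illustrative.
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge, C
H_TRUE = 6.62607015e-34      # Planck constant, J*s (used only to fabricate the data)
WORK_FUNCTION_EV = 2.3       # assumed work function of a hypothetical alkali cathode, eV

freq = np.linspace(6.0e14, 1.2e15, 8)                    # visible/UV frequencies, Hz
v_stop = (H_TRUE * freq / E_CHARGE) - WORK_FUNCTION_EV   # Einstein's relation, in volts
rng = np.random.default_rng(0)
v_stop += rng.normal(scale=0.01, size=freq.size)         # add ~10 mV of mock noise

slope, intercept = np.polyfit(freq, v_stop, 1)           # least-squares straight line
h_estimate = slope * E_CHARGE
print(f"fitted Planck constant  = {h_estimate:.3e} J*s")
print(f"threshold frequency     = {-intercept / slope:.3e} Hz")

The fitted slope recovers the constant used to generate the data, which is the logic of the historical experiment: the linear dependence of electron energy on frequency, not on intensity, is what the wave picture could not explain.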
In quantum perturbation theory of atoms and solids acted upon by electromagnetic radiation, the photoelectric effect is still commonly analyzed in terms of waves; the two approaches are equivalent because photon or wave absorption can only happen between quantized energy levels whose energy difference is that of the energy of photon. Albert Einstein's mathematical description of how the photoelectric effect was caused by absorption of quanta of light was in one of his Annus Mirabilis papers, named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". The paper proposed a simple description of energy quanta, and showed how they explained the blackbody radiation spectrum. His explanation in terms of absorption of discrete quanta of light agreed with experimental results. It explained why the energy of photoelectrons was not dependent on incident light intensity. This was a theoretical leap, but the concept was strongly resisted at first because it contradicted the wave theory of light that followed naturally from James Clerk Maxwell's equations of electromagnetism, and more generally, the assumption of infinite divisibility of energy in physical systems. Einstein's work predicted that the energy of individual ejected electrons increases linearly with the frequency of the light. The precise relationship had not at that time been tested. By 1905 it was known that the energy of photoelectrons increases with increasing frequency of incident light and is independent of the intensity of the light. However, the manner of the increase was not experimentally determined until 1914 when Millikan showed that Einstein's prediction was correct. The photoelectric effect helped to propel the then-emerging concept of wave–particle duality in the nature of light. Light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, as the energy of the emitted electrons did not depend on the intensity of the incident radiation. Classical theory predicted that the electrons would 'gather up' energy over a period of time, and then be emitted. Uses and effects Photomultipliers These are extremely light-sensitive vacuum tubes with a coated photocathode inside the envelope. The photo cathode contains combinations of materials such as cesium, rubidium, and antimony specially selected to provide a low work function, so when illuminated even by very low levels of light, the photocathode readily releases electrons. By means of a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and substantially increased in number through secondary emission to provide a readily detectable output current. Photomultipliers are still commonly used wherever low levels of light must be detected. Image sensors Video camera tubes in the early days of television used the photoelectric effect. For example, Philo Farnsworth's "Image dissector" used a screen charged by the photoelectric effect to transform an optical image into a scanned electronic signal. Photoelectron spectroscopy Because the kinetic energy of the emitted electrons is exactly the energy of the incident photon minus the energy of the electron's binding within an atom, molecule or solid, the binding energy can be determined by shining a monochromatic X-ray or UV light of a known energy and measuring the kinetic energies of the photoelectrons. 
The distribution of electron energies is valuable for studying quantum properties of these systems. It can also be used to determine the elemental composition of the samples. For solids, the kinetic energy and emission angle distribution of the photoelectrons is measured for the complete determination of the electronic band structure in terms of the allowed binding energies and momenta of the electrons. Modern instruments for angle-resolved photoemission spectroscopy are capable of measuring these quantities with a precision better than 1 meV and 0.1°. Photoelectron spectroscopy measurements are usually performed in a high-vacuum environment, because the electrons would be scattered by gas molecules if they were present. However, some companies are now selling products that allow photoemission in air. The light source can be a laser, a discharge tube, or a synchrotron radiation source. The concentric hemispherical analyzer is a typical electron energy analyzer. It uses an electric field between two hemispheres to change (disperse) the trajectories of incident electrons depending on their kinetic energies. Night vision devices Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are accelerated by an electrostatic field where they strike a phosphor coated screen, converting the electrons back into photons. Intensification of the signal is achieved either through acceleration of the electrons or by increasing the number of electrons through secondary emissions, such as with a micro-channel plate. Sometimes a combination of both methods is used. Additional kinetic energy is required to move an electron out of the conduction band and into the vacuum level. This is known as the electron affinity of the photocathode and is another barrier to photoemission other than the forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an effective electron affinity that is below the level of the conduction band. In these materials, electrons that move to the conduction band all have sufficient energy to be emitted from the material, so the film that absorbs photons can be quite thick. These materials are known as negative electron affinity materials. Spacecraft The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This can be a major problem, as other parts of the spacecraft are in shadow which will result in the spacecraft developing a negative charge from nearby plasmas. The imbalance can discharge through delicate electrical components. The static charge created by the photoelectric effect is self-limiting, because a higher charged object does not give up its electrons as easily as a lower charged object does. Moon dust Light from the Sun hitting lunar dust causes it to become positively charged from the photoelectric effect. The charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation. This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor program probes in the 1960s, and most recently the Chang'e 3 rover observed dust deposition on lunar rocks as high as about 28 cm. 
It is thought that the smallest particles are repelled kilometers from the surface and that the particles move in "fountains" as they charge and discharge. Competing processes and photoemission cross section When photon energies are as high as the electron rest energy of 511 keV, yet another process, Compton scattering, may occur. Above twice this energy, at 1.022 MeV, pair production is also more likely. Compton scattering and pair production are examples of two other competing mechanisms. Even if the photoelectric effect is the favoured reaction for a particular interaction of a single photon with a bound electron, the result is also subject to quantum statistics and is not guaranteed. The probability of the photoelectric effect occurring is measured by the cross section of the interaction, σ. This has been found to be a function of the atomic number of the target atom and photon energy. In a crude approximation, for photon energies above the highest atomic binding energy, the cross section is given by σ ∝ Z^n / E^3. Here Z is the atomic number, E is the photon energy, and n is a number which varies between 4 and 5. The photoelectric effect rapidly decreases in significance in the gamma-ray region of the spectrum, with increasing photon energy. It is also more likely from elements with high atomic number. Consequently, high-Z materials make good gamma-ray shields, which is the principal reason why lead (Z = 82) is preferred and most widely used. See also Anomalous photovoltaic effect Compton scattering Dember effect Photo–Dember effect Wave–particle duality Photomagnetic effect Photochemistry Timeline of atomic and subatomic physics References External links Astronomy Cast "http://www.astronomycast.com/2014/02/ep-335-photoelectric-effect/". AstronomyCast. Nave, R., "Wave-Particle Duality". HyperPhysics. "Photoelectric effect". Physics 2000. University of Colorado, Boulder, Colorado. (page not found) ACEPT W3 Group, "The Photoelectric Effect". Department of Physics and Astronomy, Arizona State University, Tempe, AZ. Haberkern, Thomas, and N Deepak "Grains of Mystique: Quantum Physics for the Layman". Einstein Demystifies Photoelectric Effect, Chapter 3. Department of Physics, "The Photoelectric effect ". Physics 320 Laboratory, Davidson College, Davidson. Fowler, Michael, "The Photoelectric Effect". Physics 252, University of Virginia. Go to "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light" to read an English translation of Einstein's 1905 paper. (Retrieved: 2014 Apr 11) http://www.chemistryexplained.com/Ru-Sp/Solar-Cells.html Photo-electric transducers: http://sensorse.com/page4en.html Applets "HTML 5 JavaScript simulator" Open Source Physics project "Photoelectric Effect". The Physics Education Technology (PhET) project. (Java) Fendt, Walter, "The Photoelectric Effect". (Java) "Applet: Photo Effect ". Open Source Distributed Learning Content Management and Assessment System. (Java) Quantum mechanics Electrical phenomena Albert Einstein Heinrich Hertz Energy conversion Photovoltaics Photochemistry Electrochemistry
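The crude Z^n / E^3 scaling quoted above is easy to explore numerically. The sketch below compares the relative photoelectric cross sections of lead and aluminium at a fixed gamma-ray energy under that approximation; the exponent n = 4.5 and the normalization are assumptions chosen for illustration, so only the ratio between the two materials is meaningful.

# Minimal sketch of the crude photoelectric cross-section scaling sigma ~ Z**n / E**3.
# Only relative values are meaningful here; n = 4.5 is an assumed mid-range exponent.

def relative_photoelectric_cross_section(z, photon_energy_mev, n=4.5):
    return z ** n / photon_energy_mev ** 3

E_MEV = 0.5  # assumed photon energy, MeV
lead = relative_photoelectric_cross_section(82, E_MEV)
aluminium = relative_photoelectric_cross_section(13, E_MEV)
print(f"relative cross section, Pb/Al at {E_MEV} MeV: {lead / aluminium:.0f}x")
# The steep Z dependence is why high-Z lead is favoured for gamma-ray shielding.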
Photoelectric effect
Physics,Chemistry
5,971
77,017,919
https://en.wikipedia.org/wiki/WIPO%20Treaty%20on%20Intellectual%20Property%2C%20Genetic%20Resources%20and%20Associated%20Traditional%20Knowledge
The WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge or GRATK Treaty is an international legal instrument to combat biopiracy through disclosure requirements for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge. The treaty was concluded at the headquarters of the World Intellectual Property Organization (WIPO) in Geneva, Switzerland, on 24 May 2024, after more than two decades of previous developments by WIPO's Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC). The treaty was deemed "historic in many regards" by some observers, and qualified by the Indigenous Caucus as a "first step towards guaranteeing just and transparent access to these resources." Background and history 2001–2022: Work of the WIPO IGC The IGC was established in 2001 by the General Assembly of WIPO. Since 2010, the mandate of the IGC has remained that of concluding a consensual text which would bridge the gaps between the numerous existing international legal instruments that provide some, but insufficient, protection for traditional knowledge, traditional cultural expressions, or genetic resources (UNDRIP, Convention on Biological Diversity, Nagoya Protocol, FAO plant treaty, UNESCO conventions on culture and intangible heritage, etc.), none of which include explicit protections for indigenous peoples and local communities. The IGC's negotiations were suspended in 2020 because of the COVID-19 pandemic, and resumed in 2022. 2022: Selection of the draft text In 2022, the IGC agreed to move on to the next steps of treaty negotiation, and WIPO agreed to convene a Diplomatic Conference by 2024 to consider a draft treaty that the Committee had been working on. The selection of the draft text that was to serve as a basis for the negotiations of the final text of the treaty received some criticism from civil society observers. The 2022 WIPO General Assembly decided that a short version of the draft (the "Chair's text"), which had been drafted by Australian ambassador Ian Gross, Chair of the IGC in 2019, would be the basis for the treaty's negotiations. Prior to that decision, the text which was expected to be used as the basis for the negotiations was the "Consolidated text", a more comprehensive document on which IGC Member States had been working by consensus for years. Unlike the Consolidated text, which addressed traditional knowledge and traditional cultural expressions as such, and different forms of intellectual property, the Chair's text focused only on genetic resources and the patent system. In August 2023, India submitted a proposal with a series of amendments to the Chair's text, aiming to add back some elements from the Consolidated text in the discussion. 2023: IGC Special Session and Preparatory Committee Ahead of the Diplomatic Conference, two extraordinary meetings were convened to prepare the Conference: the Special Session of the IGC (4–8 September 2023) and the Preparatory Committee of the Diplomatic Conference (11–13 September, and 13 December 2023). The Special Session, which took place from 4 to 8 September 2023, reviewed the part of the Chair's text containing substantive articles. The Preparatory Committee, held the week after, addressed administrative and procedural parts of the draft. Jointly, these two meetings yielded a revised draft, which served as the basis for the 2024 Diplomatic Conference discussions.
The Preparatory Committee also adopted Draft Rules of Procedure for the Diplomatic Conference, as well as a List of Invitees. On 13 September 2023, the committee had to suspend its session due to the absence of submission by Member States of proposals to host the Diplomatic Conference. On 13 December, the committee reconvened to adopt a decision to hold the Diplomatic Conference at WIPO's headquarters in Geneva, facing the lack of alternative proposals. Diplomatic Conference and adoption in 2024 Convening and organization As explained on the website of the Diplomatic Conference:On July 21, 2022, the WIPO General Assembly decided to convene a Diplomatic Conference to conclude an International Legal Instrument Relating to Intellectual Property, Genetic Resources and Traditional Knowledge Associated with Genetic Resources no later than 2024.The Diplomatic Conference was held in Geneva, Switzerland, between 13 and 24 May 2024. During the Conference, the draft resulting from the Special Session and Preparatory Committee was discussed and amended. Participation at the Diplomatic Conference Adoption and signatures The final legal instrument, the WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (often referred to by its acronym "GRATK") was adopted in the night of Thursday 23 to Friday 24 May 2024, and opened for signature the 24 May in the afternoon, at the WIPO headquarter in Geneva.This is the first WIPO Treaty to address the interface between intellectual property, genetic resources and traditional knowledge and the first WIPO Treaty to include provisions specifically for Indigenous Peoples as well as local communities. The Treaty, once it enters into force with 15 contracting parties, will establish in international law a new disclosure requirement for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.The Treaty was concluded on 24 May 2024 and immediately opened for signature. Under the Treaty's Article 16, it is stated that the Treaty will be "open for signature at the Diplomatic Conference in Geneva and thereafter […] for one year after its adoption." At the closing of the Diplomatic Conference, on 24 May 2024, the Treaty was signed by 30 countries: Algeria, Bosnia and Herzegovina, Brazil, Burkina Faso, Central African Republic, Chile, Colombia, Congo, Cote d'Ivoire, Eswatini, Ghana, Lesotho, Madagascar, Malawi, Marshall Islands, Morocco, Namibia, Nicaragua, Niger, Nigeria, Niue, North Korea, Paraguay, Saint Vincent & the Grenadines, São Tomé and Príncipe, Senegal, South Africa, Tanzania, Uruguay, and Vanuatu. Ratifications and entry into force Under Article 17, the Treaty is planned to enter into force 3 months after ratification or accession by 15 countries. Signature, ratification and accession is open to any Member State of the WIPO, under the Treaty's Article 12. Countries that sign the Treaty within the first year period (until 24 May 2025) have to further ratify it in order for the Treaty to enter into force. Countries deciding to join after the initial one-year period will join through "adhesion" (equivalent to both signature and ratification). 
Legal provisions Preamble and objectives Disclosures requirements (Article 3) Matters of retroactivity (Article 4) Sanctions and remedies (Article 5) Databases and information systems (Article 6) Relationships with other treaties (Article 7) Review of the scope and contents of the Treaty (Article 8) and other forms of amendment (Articles 14 and 15) Assembly of Contracting Parties (Article 10) Secretariat (Article 11) See also World Intellectual Property Organization – Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore Biopiracy –– Bioprospecting Patents –– Patent Law Treaty –– Patent Cooperation Treaty Nagoya Protocol –– High Seas Treaty UNDRIP –– UNDROP Further reading References External links Official WIPO webpage on the GRATK 2024 treaties Anti-biopiracy treaties Biodiversity Biopiracy Convention on Biological Diversity Environmental treaties Intellectual property treaties GRATK Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge Patent law treaties Treaties concluded in 2024 Treaties not entered into force Treaties of Malawi Treaties concluded in Geneva GRATK Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge
WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge
Biology
1,540
66,600,957
https://en.wikipedia.org/wiki/Neolecta%20vitellina
Neolecta vitellina is a species of fungus belonging to the family Neolectaceae. It has cosmopolitan distribution. References Ascomycota Fungus species
Neolecta vitellina
Biology
35
63,036,934
https://en.wikipedia.org/wiki/Reverse%20Mathematics%3A%20Proofs%20from%20the%20Inside%20Out
Reverse Mathematics: Proofs from the Inside Out is a book by John Stillwell on reverse mathematics, the process of examining proofs in mathematics to determine which axioms are required by the proof. It was published in 2018 by the Princeton University Press. Topics The book begins with a historical overview of the long struggles with the parallel postulate in Euclidean geometry, and of the foundational crisis of the late 19th and early 20th centuries, Then, after reviewing background material in real analysis and computability theory, the book concentrates on the reverse mathematics of theorems in real analysis, including the Bolzano–Weierstrass theorem, the Heine–Borel theorem, the intermediate value theorem and extreme value theorem, the Heine–Cantor theorem on uniform continuity, the Hahn–Banach theorem, and the Riemann mapping theorem. These theorems are analyzed with respect to three of the "big five" subsystems of second-order arithmetic, namely arithmetical comprehension, recursive comprehension, and the weak Kőnig's lemma. Audience The book is aimed at a "general mathematical audience" including undergraduate mathematics students with an introductory-level background in real analysis. It is intended both to excite mathematicians, physicists, and computer scientists about the foundational issues in their fields, and to provide an accessible introduction to the subject. However, it is not a textbook; for instance, it has no exercises. One theme of the book is that many theorems in this area require axioms in second-order arithmetic that encompass infinite processes and uncomputable functions. Reception and related reading Jeffry Hirst criticizes the book, writing that "if one is not too obsessive about the details, Proofs from the Inside Out is an interesting introduction," while finding details that he would prefer to be handled differently, in a topic for which details are important. In particular, in this area, there are multiple choices for how to build up the arithmetic on real numbers from simpler data types such as the natural numbers, and while Stillwell discusses three of them (decimal numerals, Dedekind cuts, and nested intervals), converting between them itself requires nontrivial axiomatic assumptions. However, James Case calls the book "very readable", and Roman Kossak calls it "a stellar example of expository writing on mathematics". Several other reviewers agree that this book could be helpful as a non-technical way to create interest in this topic in mathematicians who are not already familiar with it, and lead them to more in-depth material in this area. As additional reading on reverse mathematics in combinatorics, Hirst suggests Slicing the Truth by Denis Hirschfeldt. Another book suggested by reviewer Reinhard Kahle is Stephen G. Simpson's Subsystems of Second Order Arithmetic. References Mathematical logic Proof theory Computability theory Real analysis Mathematics books 2018 non-fiction books Princeton University Press books
Reverse Mathematics: Proofs from the Inside Out
Mathematics
599
27,806,268
https://en.wikipedia.org/wiki/Water%20window
The water window is a region of the electromagnetic spectrum in which water is transparent to soft x-rays. The window extends from the K-absorption edge of carbon at 282 eV (68 PHz, 4.40 nm wavelength) to the K-edge of oxygen at 533 eV (129 PHz, 2.33 nm wavelength). Water is transparent to these X-rays, but carbon and its organic compounds are absorbing. These wavelengths could be used in an x-ray microscope for viewing living specimens. This is technically challenging because few if any viable lens materials are available above extreme ultraviolet. See also electromagnetic absorption by water x-ray absorption spectroscopy References X-rays
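Since the window is defined by two photon energies, it is straightforward to convert them to wavelengths with λ = hc/E and to test whether a given soft X-ray wavelength falls inside it. The sketch below does exactly that, using only the carbon and oxygen K-edge values quoted above.

# Minimal sketch: convert the carbon and oxygen K-edge energies bounding the
# water window into wavelengths (lambda = h*c / E) and test a sample wavelength.
H_PLANCK = 6.62607015e-34   # J*s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # J per eV

def wavelength_nm(energy_ev):
    return H_PLANCK * C_LIGHT / (energy_ev * EV) * 1e9

carbon_k_edge_ev, oxygen_k_edge_ev = 282.0, 533.0
low, high = wavelength_nm(oxygen_k_edge_ev), wavelength_nm(carbon_k_edge_ev)
print(f"water window: {low:.2f} nm to {high:.2f} nm")   # roughly 2.33 to 4.40 nm

sample_nm = 3.0  # an arbitrary soft X-ray wavelength to test
print("inside water window:", low <= sample_nm <= high)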
Water window
Physics
137
22,952,896
https://en.wikipedia.org/wiki/Zonal%20spherical%20harmonics
In the mathematical study of rotational symmetry, the zonal spherical harmonics are special spherical harmonics that are invariant under the rotation through a particular fixed axis. The zonal spherical functions are a broad extension of the notion of zonal spherical harmonics to allow for a more general symmetry group. On the two-dimensional sphere, the unique zonal spherical harmonic of degree ℓ invariant under rotations fixing the north pole is represented in spherical coordinates by where is the normalized Legendre polynomial of degree , . The generic zonal spherical harmonic of degree ℓ is denoted by , where x is a point on the sphere representing the fixed axis, and y is the variable of the function. This can be obtained by rotation of the basic zonal harmonic In n-dimensional Euclidean space, zonal spherical harmonics are defined as follows. Let x be a point on the (n−1)-sphere. Define to be the dual representation of the linear functional in the finite-dimensional Hilbert space Hℓ of spherical harmonics of degree ℓ with respect to the Haar measure on the sphere with total mass (see Unit sphere). In other words, the following reproducing property holds: for all where is the Haar measure from above. Relationship with harmonic potentials The zonal harmonics appear naturally as coefficients of the Poisson kernel for the unit ball in Rn: for x and y unit vectors, where is the surface area of the (n-1)-dimensional sphere. They are also related to the Newton kernel via where and the constants are given by The coefficients of the Taylor series of the Newton kernel (with suitable normalization) are precisely the ultraspherical polynomials. Thus, the zonal spherical harmonics can be expressed as follows. If , then where are the constants above and is the ultraspherical polynomial of degree ℓ. Properties The zonal spherical harmonics are rotationally invariant, meaning that for every orthogonal transformation R. Conversely, any function on that is a spherical harmonic in y for each fixed x, and that satisfies this invariance property, is a constant multiple of the degree zonal harmonic. If Y1, ..., Yd is an orthonormal basis of , then Evaluating at gives References . Rotational symmetry Special hypergeometric functions
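On the two-dimensional sphere, the degree-ℓ zonal harmonic with axis x is, up to a normalization constant, the Legendre polynomial of the cosine of the angle between x and y, so its rotational invariance can be checked numerically. The sketch below does this with SciPy's Legendre polynomials; the normalization is deliberately left out since conventions differ, and the random rotation is generated only to demonstrate the invariance.

# Minimal sketch: a degree-l zonal harmonic on the 2-sphere is, up to normalization,
# P_l(x . y) for unit vectors x and y.  Its value is unchanged when both points are
# rotated by the same orthogonal transformation, which this script checks numerically.
import numpy as np
from scipy.special import eval_legendre

def zonal(l, x, y):
    return eval_legendre(l, np.dot(x, y))   # normalization constant omitted on purpose

def random_unit_vector(rng):
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
x, y = random_unit_vector(rng), random_unit_vector(rng)

# Build a random orthogonal matrix via QR decomposition of a Gaussian matrix.
q, r = np.linalg.qr(rng.normal(size=(3, 3)))
q *= np.sign(np.diag(r))          # fix column signs so q is a well-defined orthogonal matrix

l = 4
print(zonal(l, x, y))
print(zonal(l, q @ x, q @ y))     # same value: the zonal harmonic is rotation invariant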
Zonal spherical harmonics
Physics
466
13,153,707
https://en.wikipedia.org/wiki/Accounting%20machine
An accounting machine, or bookkeeping machine or recording-adder, was generally a calculator and printer combination tailored for a specific commercial activity such as billing, payroll, or ledger. Accounting machines were widespread from the early 1900s to 1980s, but were rendered obsolete by the availability of low-cost computers such as the IBM PC. This type of machine is generally distinct from unit record equipment (some unit record tabulating machines were also called "accounting machines"). List of vendors/accounting machines Burroughs Corporation: Burroughs Sensimatic Burroughs Sensitronic Burroughs B80 Burroughs E103 Burroughs Computer F2000 Burroughs L500 Burroughs E1400 Electronic Computing/Accounting Machine with Magnetic Striped Ledger Dalton Adding Machine Company Electronics Corporation of America: Magnefile-B Magnefile-D Elliott-Fisher Federal Adding Machines IBM: IBM 632 IBM 858 Cardatype Accounting Machine IBM 6400 Series Laboratory for Electronics: The Inventory Machine II (TIM-II) Monroe Calculator Company: Model 200 Synchro-Monroe President Monrobot IX NCR Corporation: Post-Tronic Bookkeeping Machine - Class 29 Compu-Tronic Accounting Machine Accounting Machine - Class 33 Window Posting Machine - Class 42 Olivetti: General Bookkeeping Machine (GBM) J. B. Rea Company: READIX, c. 1955 Sundstrand Adding Machines Underwood: ELECOM 50 "The First Electronic Accounting Machine" ELECOM 125, 125 FP (File Processor), 1956 See also Unit record equipment References Programmable calculators Early computers Mechanical calculators
Accounting machine
Technology
324
21,479,365
https://en.wikipedia.org/wiki/Tolyl%20group
In organic chemistry, tolyl groups are functional groups related to toluene. They have the general formula CH3C6H4R; changing the relative position of the methyl group and the R substituent on the aromatic ring generates three possible structural isomers: 1,2 (ortho), 1,3 (meta), and 1,4 (para). Tolyl groups are aryl groups which are commonly found in the structure of diverse chemical compounds. They are considered nonpolar and hydrophobic groups. Tolyl groups are often introduced into compounds by Williamson etherification, using tolyl alcohols as reagents, or by C–C coupling reactions. Tolyl sulfonates (tosylates) are excellent leaving groups in nucleophilic substitutions; for this reason, they are commonly generated as intermediates to activate alcohols. To this end, 4-toluenesulfonyl chloride is reacted with the corresponding alcohol in the presence of a base. References Aryl groups
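To make the three positional isomers concrete, the sketch below writes them as SMILES strings and canonicalizes them, using R = OH (giving the three cresols) purely as an illustrative substituent; RDKit is an assumed dependency here, not something the article itself mentions.

# Minimal sketch: the ortho/meta/para tolyl isomers expressed as SMILES,
# with R = OH chosen only for illustration (the three cresols).
# Requires RDKit, which is an assumed dependency.
from rdkit import Chem

isomers = {
    "1,2 (ortho)": "Cc1ccccc1O",
    "1,3 (meta)":  "Cc1cccc(O)c1",
    "1,4 (para)":  "Cc1ccc(O)cc1",
}

for name, smiles in isomers.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name}: canonical SMILES = {Chem.MolToSmiles(mol)}")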
Tolyl group
Chemistry
212
2,526,903
https://en.wikipedia.org/wiki/Isotopes%20of%20rhenium
Naturally occurring rhenium (75Re) is 37.4% 185Re, which is stable (although it is predicted to decay), and 62.6% 187Re, which is unstable but has a very long half-life (4.12×1010 years). Among elements with a known stable isotope, only indium and tellurium similarly occur with a stable isotope in lower abundance than the long-lived radioactive isotope. There are 36 other unstable isotopes recognized, the longest-lived of which are 183Re with a half-life of 70 days, 184Re with a half-life of 38 days, 186Re with a half-life of 3.7186 days, 182Re with a half-life of 64.0 hours, and 189Re with a half-life of 24.3 hours. There are also numerous isomers, the longest-lived of which are 186mRe with a half-life of 200,000 years and 184mRe with a half-life of 177.25 days. All others have half-lives less than a day. List of isotopes |-id=Rhenium-159 | rowspan=2|159Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 84 | rowspan=2| | rowspan=2|21(4) μs | p (92.5%) | 158W | rowspan=2|(11/2−) | rowspan=2| | rowspan=2| |- | α (7.5%) | 155Ta |-id=Rhenium-160 | rowspan=2|160Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 85 | rowspan=2|159.98212(43)# | rowspan=2|611(7) μs | p (89%) | 159W | rowspan=2|(2−) | rowspan=2| | rowspan=2| |- | α (11%) | 156Ta |-id=Rhenium-160m | style="text-indent:1em" | 160mRe | colspan="3" style="text-indent:2em" | 185(21)# keV | 2.8(1) μs | IT | 160Re | (9+) | | |-id=Rhenium-161 | 161Re | style="text-align:right" | 75 | style="text-align:right" | 86 | 160.97759(22) | 0.37(4) ms | p | 160W | 1/2+ | | |-id=Rhenium-161m | style="text-indent:1em" | 161mRe | colspan="3" style="text-indent:2em" | 123.8(13) keV | 15.6(9) ms | α | 157Ta | 11/2− | | |-id=Rhenium-162 | rowspan=2|162Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 87 | rowspan=2|161.97600(22)# | rowspan=2|107(13) ms | α (94%) | 158Ta | rowspan=2|(2−) | rowspan=2| | rowspan=2| |- | β+ (6%) | 162W |-id=Rhenium-162m | rowspan=2 style="text-indent:1em" | 162mRe | rowspan=2 colspan="3" style="text-indent:2em" | 173(10) keV | rowspan=2|77(9) ms | α (91%) | 158Ta | rowspan=2|(9+) | rowspan=2| | rowspan=2| |- | β+ (9%) | 162W |-id=Rhenium-163 | rowspan=2|163Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 88 | rowspan=2|162.972081(21) | rowspan=2|390(70) ms | β+ (68%) | 163W | rowspan=2|(1/2+) | rowspan=2| | rowspan=2| |- | α (32%) | 159Ta |-id=Rhenium-163m | rowspan=2 style="text-indent:1em" | 163mRe | rowspan=2 colspan="3" style="text-indent:2em" | 115(4) keV | rowspan=2|214(5) ms | α (66%) | 159Ta | rowspan=2|(11/2−) | rowspan=2| | rowspan=2| |- | β+ (34%) | 163W |-id=Rhenium-164 | rowspan=2|164Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 89 | rowspan=2|163.97032(17)# | rowspan=2|0.53(23) s | α (58%) | 160Ta | rowspan=2|high | rowspan=2| | rowspan=2| |- | β+ (42%) | 164W |-id=Rhenium-164m | style="text-indent:1em" | 164mRe | colspan="3" style="text-indent:2em" | 120(120)# keV | 530(230) ms | | | (2#)− | | |-id=Rhenium-165 | rowspan=2|165Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 90 | rowspan=2|164.967089(30) | rowspan=2|1# s | β+ | 165W | rowspan=2|1/2+# | rowspan=2| | rowspan=2| |- | α | 161Ta |-id=Rhenium-165m | rowspan=2 style="text-indent:1em" | 165mRe | rowspan=2 colspan="3" style="text-indent:2em" | 47(26) keV | rowspan=2|2.1(3) s | β+ (87%) | 165W | rowspan=2|11/2−# | rowspan=2| | rowspan=2| |- | α (13%) 
| 161Ta |-id=Rhenium-166 | rowspan=2|166Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 91 | rowspan=2|165.96581(9)# | rowspan=2|2# s | β+ | 166W | rowspan=2|2−# | rowspan=2| | rowspan=2| |- | α | 162Ta |-id=Rhenium-167 | rowspan=2|167Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 92 | rowspan=2|166.96260(6)# | rowspan=2|3.4(4) s | α | 163Ta | rowspan=2|9/2−# | rowspan=2| | rowspan=2| |- | β+ | 167W |-id=Rhenium-167m | rowspan=2 style="text-indent:1em" | 167mRe | rowspan=2 colspan="3" style="text-indent:2em" | 130(40)# keV | rowspan=2|5.9(3) s | β+ (99.3%) | 167W | rowspan=2|1/2+# | rowspan=2| | rowspan=2| |- | α (.7%) | 163Ta |-id=Rhenium-168 | rowspan=2|168Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 93 | rowspan=2|167.96157(3) | rowspan=2|4.4(1) s | β+ (99.99%) | 168W | rowspan=2|(5+, 6+, 7+) | rowspan=2| | rowspan=2| |- | α (.005%) | 164Ta |-id=Rhenium-168m | style="text-indent:1em" | 168mRe | colspan="3" style="text-indent:2em" | non-exist | 6.6(15) s | | | | | |-id=Rhenium-169 | rowspan=2|169Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 94 | rowspan=2|168.95879(3) | rowspan=2|8.1(5) s | β+ (99.99%) | 169W | rowspan=2|9/2−# | rowspan=2| | rowspan=2| |- | α (.005%) | 165Ta |-id=Rhenium-169m | rowspan=2 style="text-indent:1em" | 169mRe | rowspan=2 colspan="3" style="text-indent:2em" | 145(29) keV | rowspan=2|15.1(15) s | β+ (99.8%) | 169W | rowspan=2|1/2+# | rowspan=2| | rowspan=2| |- | α (.2%) | 164Ta |-id=Rhenium-170 | rowspan=2|170Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 95 | rowspan=2|169.958220(28) | rowspan=2|9.2(2) s | β+ (99.99%) | 170W | rowspan=2|(5+) | rowspan=2| | rowspan=2| |- | α (.01%) | 166Ta |-id=Rhenium-171 | 171Re | style="text-align:right" | 75 | style="text-align:right" | 96 | 170.95572(3) | 15.2(4) s | β+ | 171W | (9/2−) | | |-id=Rhenium-172 | 172Re | style="text-align:right" | 75 | style="text-align:right" | 97 | 171.95542(6) | 15(3) s | β+ | 172W | (5) | | |-id=Rhenium-172m | style="text-indent:1em" | 172mRe | colspan="3" style="text-indent:2em" | 0(100)# keV | 55(5) s | β+ | 172W | (2) | | |-id=Rhenium-173 | 173Re | style="text-align:right" | 75 | style="text-align:right" | 98 | 172.95324(3) | 1.98(26) min | β+ | 173W | (5/2−) | | |-id=Rhenium-174 | 174Re | style="text-align:right" | 75 | style="text-align:right" | 99 | 173.95312(3) | 2.40(4) min | β+ | 174W | | | |-id=Rhenium-175 | 175Re | style="text-align:right" | 75 | style="text-align:right" | 100 | 174.95138(3) | 5.89(5) min | β+ | 175W | (5/2−) | | |-id=Rhenium-176 | 176Re | style="text-align:right" | 75 | style="text-align:right" | 101 | 175.95162(3) | 5.3(3) min | β+ | 176W | 3+ | | |-id=Rhenium-177 | 177Re | style="text-align:right" | 75 | style="text-align:right" | 102 | 176.95033(3) | 14(1) min | β+ | 177W | 5/2− | | |-id=Rhenium-177m | style="text-indent:1em" | 177mRe | colspan="3" style="text-indent:2em" | 84.71(10) keV | 50(10) μs | | | 5/2+ | | |-id=Rhenium-178 | 178Re | style="text-align:right" | 75 | style="text-align:right" | 103 | 177.95099(3) | 13.2(2) min | β+ | 178W | (3+) | | |-id=Rhenium-179 | 179Re | style="text-align:right" | 75 | style="text-align:right" | 104 | 178.949988(26) | 19.5(1) min | β+ | 179W | (5/2)+ | | |-id=Rhenium-179m1 | style="text-indent:1em" | 179m1Re | colspan="3" style="text-indent:2em" | 65.39(9) keV | 95(25) μs | | | (5/2−) | | |-id=Rhenium-179m2 | style="text-indent:1em" | 
179m2Re | colspan="3" style="text-indent:2em" | 1684.59(14)+Y keV | >0.4 μs | | | (23/2+) | | |-id=Rhenium-180 | 180Re | style="text-align:right" | 75 | style="text-align:right" | 105 | 179.950789(23) | 2.44(6) min | β+ | 180W | (1)− | | |-id=Rhenium-181 | 181Re | style="text-align:right" | 75 | style="text-align:right" | 106 | 180.950068(14) | 19.9(7) h | β+ | 181W | 5/2+ | | |-id=Rhenium-182 | 182Re | style="text-align:right" | 75 | style="text-align:right" | 107 | 181.95121(11) | 64.0(5) h | β+ | 182W | 7+ | | |-id=Rhenium-182m1 | style="text-indent:1em" | 182m1Re | colspan="3" style="text-indent:2em" | 60(100) keV | 12.7(2) h | β+ | 182W | 2+ | | |-id=Rhenium-182m2 | style="text-indent:1em" | 182m2Re | colspan="3" style="text-indent:2em" | 235.736(10)+X keV | 585(21) ns | | | 2− | | |-id=Rhenium-182m3 | style="text-indent:1em" | 182m3Re | colspan="3" style="text-indent:2em" | 461.3(1)+X keV | 0.78(9) μs | | | (4−) | | |-id=Rhenium-183 | 183Re | style="text-align:right" | 75 | style="text-align:right" | 108 | 182.950820(9) | 70.0(14) d | EC | 183W | 5/2+ | | |-id=Rhenium-183m | style="text-indent:1em" | 183mRe | colspan="3" style="text-indent:2em" | 1907.6(3) keV | 1.04(4) ms | IT | 183Re | (25/2+) | | |-id=Rhenium-184 | 184Re | style="text-align:right" | 75 | style="text-align:right" | 109 | 183.952521(5) | 35.4(7) d | β+ | 184W | 3(−) | | |-id=Rhenium-184m | rowspan=2 style="text-indent:1em" | 184mRe | rowspan=2 colspan="3" style="text-indent:2em" | 188.01(4) keV | rowspan=2|177.25(7) d | IT (75.4%) | 184Re | rowspan=2|8(+) | rowspan=2| | rowspan=2| |- | β+ (24.6%) | 184W |-id=Rhenium-185 | 185Re | style="text-align:right" | 75 | style="text-align:right" | 110 | 184.9529550(13) | colspan=3 align=center|Observationally Stable | 5/2+ | 0.3740(2) | |-id=Rhenium-185m | style="text-indent:1em" | 185mRe | colspan="3" style="text-indent:2em" | 2124(2) keV | 123(23) ns | | | (21/2) | | |- | rowspan=2|186Re | rowspan=2 style="text-align:right" | 75 | rowspan=2 style="text-align:right" | 111 | rowspan=2|185.9549861(13) | rowspan=2|3.7186(5) d | β− (93.1%) | 186Os | rowspan=2|1− | rowspan=2| | rowspan=2| |- | EC (6.9%) | 186W |-id=Rhenium-186m | style="text-indent:1em" | 186mRe | colspan="3" style="text-indent:2em" | 149(7) keV | 2.0(5)×105 y | IT | 186Re | (8+) | | |-id=Rhenium-187 | 187Re | style="text-align:right" | 75 | style="text-align:right" | 112 | 186.9557531(15) | 4.12(2)×1010 y | β− | 187Os | 5/2+ | 0.6260(2) | |-id=Rhenium-188 | 188Re | style="text-align:right" | 75 | style="text-align:right" | 113 | 187.9581144(15) | 17.0040(22) h | β− | 188Os | 1− | | |-id=Rhenium-188m | style="text-indent:1em" | 188mRe | colspan="3" style="text-indent:2em" | 172.069(9) keV | 18.59(4) min | IT | 188Re | (6)− | | |-id=Rhenium-189 | 189Re | style="text-align:right" | 75 | style="text-align:right" | 114 | 188.959229(9) | 24.3(4) h | β− | 189Os | 5/2+ | | |-id=Rhenium-190 | 190Re | style="text-align:right" | 75 | style="text-align:right" | 115 | 189.96182(16) | 3.1(3) min | β− | 190Os | (2)− | | |-id=Rhenium-190m | rowspan=2 style="text-indent:1em" | 190mRe | rowspan=2 colspan="3" style="text-indent:2em" | 210(50) keV | rowspan=2|3.2(2) h | β− (54.4%) | 190Os | rowspan=2|(6−) | rowspan=2| | rowspan=2| |- | IT (45.6%) | 190Re |-id=Rhenium-191 | 191Re | style="text-align:right" | 75 | style="text-align:right" | 116 | 190.963125(11) | 9.8(5) min | β− | 191Os | (3/2+, 1/2+) | | |-id=Rhenium-192 | 192Re | style="text-align:right" | 75 | style="text-align:right" | 117 | 191.96596(21)# | 16(1) s | β− | 192Os 
| | | |-id=Rhenium-193 | 193Re | style="text-align:right" | 75 | style="text-align:right" | 118 | 192.96747(21)# | 30# s [>300 ns] | | | 5/2+# | | |-id=Rhenium-194 | 194Re | style="text-align:right" | 75 | style="text-align:right" | 119 | 193.97042(32)# | 2# s [>300 ns] | | | | | Rhenium-186 Rhenium-186 is a beta emitter and radiopharmaceutical that is used to treat glioblastoma, is used in theranostic medicine and has been reported to be used in synoviorthesis. References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Rhenium Rhenium
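The half-lives quoted above translate directly into surviving fractions through N/N0 = (1/2)^(t/T½). The sketch below applies that relation to the two rhenium isotopes most relevant outside the laboratory: primordial 187Re over geological time, and the medical isotope 186Re over a week. The half-lives come from the text above; the age of the Earth (about 4.54 billion years) and the one-week interval are illustrative choices of elapsed time.

# Minimal sketch: surviving fraction N/N0 = 0.5 ** (t / half_life) for two
# rhenium isotopes, using half-lives quoted in the article.
def surviving_fraction(elapsed, half_life):
    return 0.5 ** (elapsed / half_life)

RE187_HALF_LIFE_YR = 4.12e10       # years (beta decay to 187Os)
EARTH_AGE_YR = 4.54e9              # years, assumed elapsed time for illustration
print(f"187Re remaining since Earth formed: "
      f"{surviving_fraction(EARTH_AGE_YR, RE187_HALF_LIFE_YR):.4f}")

RE186_HALF_LIFE_D = 3.7186         # days
print(f"186Re remaining after one week:     "
      f"{surviving_fraction(7.0, RE186_HALF_LIFE_D):.4f}")

The slow decay of 187Re relative to the age of the Earth is what makes the rhenium–osmium pair useful for dating, while the few-day half-life of 186Re is what suits it to therapeutic use.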
Isotopes of rhenium
Chemistry
5,242
5,596,641
https://en.wikipedia.org/wiki/Horse%20behavior
Horse behavior is best understood from the view that horses are prey animals with a well-developed fight-or-flight response. Their first reaction to a threat is often to flee, although sometimes they stand their ground and defend themselves or their offspring in cases where flight is untenable, such as when a foal would be threatened. Nonetheless, because of their physiology horses are also suited to a number of work and entertainment-related tasks. Humans domesticated horses thousands of years ago, and they have been used by humans ever since. Through selective breeding, some breeds of horses have been bred to be quite docile, particularly certain large draft horses. On the other hand, most light horse riding breeds were developed for speed, agility, alertness, and endurance; building on natural qualities that extended from their wild ancestors. Horses' instincts can be used to human advantage to create a bond between human and horse. These techniques vary, but are part of the art of horse training. The "fight-or-flight" response Horses evolved from small mammals whose survival depended on their ability to flee from predators (for example: wolves, big cats, bears). This survival mechanism still exists in the modern domestic horse. Humans have removed many predators from the life of the domestic horse; however, its first instinct when frightened is to escape. If running is not possible, the horse resorts to biting, kicking, striking or rearing to protect itself. Many of the horse's natural behavior patterns, such as herd-formation and social facilitation of activities, are directly related to their being a prey species. The fight-or-flight response involves nervous impulses which result in hormone secretions into the bloodstream. When a horse reacts to a threat, it may initially "freeze" in preparation to take flight. The fight-or-flight reaction begins in the amygdala, which triggers a neural response in the hypothalamus. The initial reaction is followed by activation of the pituitary gland and secretion of the hormone ACTH. The adrenal gland is activated almost simultaneously and releases the neurotransmitters epinephrine (adrenaline) and norepinephrine (noradrenaline). The release of chemical messengers results in the production of the hormone cortisol, which increases blood pressure and blood sugar, and suppresses the immune system. Catecholamine hormones, such as epinephrine and norepinephrine, facilitate immediate physical reactions associated with a preparation for violent muscular action. The result is a rapid rise in blood pressure, resulting in an increased supply of oxygen and glucose for energy to the brain and skeletal muscles, the most vital organs the horse needs when fleeing from a perceived threat. However, the increased supply of oxygen and glucose to these areas is at the expense of "non-essential" flight organs, such as the skin and abdominal organs. Once the horse has removed itself from immediate danger, the body is returned to more "normal" conditions via the parasympathetic nervous system. This is triggered by the release of endorphins into the brain, and it effectively reverses the effects of noradrenaline – metabolic rate, blood pressure and heart rate all decrease and the increased oxygen and glucose being supplied to the muscles and brain are returned to normal. This is also known as the "rest and digest" state. As herd animals Horses are highly social herd animals that prefer to live in a group. 
An older theory of hierarchy in herd of horses is the "linear dominance hierarchy". Newer research shows that there is no "pecking order" in horse herds. Free ranging, wild horses are mostly communicating via positive reinforcement and less via punishment. Horses are able to form companionship attachments not only to their own species, but with other animals as well, most notably humans. In fact, many domesticated horses will become anxious, flighty, and hard to manage if they are isolated. Horses kept in near-complete isolation, particularly in a closed stable where they cannot see other animals, may require a stable companion such as a cat, goat, or even a small pony or donkey, to provide company and reduce stress. When anxiety over separation occurs while a horse is being handled by a human, the horse is described as "herd-bound". However, through proper training, horses learn to be comfortable away from other horses, often because they learn to trust a human handler. Horses are able to trust a human handler. Since it is not possible to form interspecies herds, humans cannot be part of a horse herd hierarchy and therefore can never take the place of "lead-mares" or "lead-stallions". Social organization in the wild Feral and wild horse "herds" are usually made up of several separate, small "bands" which share a territory. Size may range from two to 25 individuals, mostly mares and their offspring, with one to five stallions. Bands are defined as a harem model. Each band is led by a dominant mare (sometimes called the "lead mare" or the "boss mare"). The composition of bands changes as young animals are driven out of their natal band and join other bands, or as stallions challenge each other for dominance. In bands, there is usually a single "herd" or "lead" stallion, though occasionally a few less-dominant males may remain on the fringes of the group. The reproductive success of the lead stallion is determined in part by his ability to prevent other males from mating with the mares of his harem. The stallion also exercises protective behavior, patrolling around the band, and taking the initiative when the band encounters a potential threat. The stability of the band is not affected by size, but tends to be more stable when there are subordinate stallions attached to the harem. In modern reintroduced populations of Przewalski's horse, the only remaining truly wild horse, family groups are formed by one adult stallion, one to three mares, and their common offspring that stay in the family group until they are no longer dependent, usually at two or three years old. Hierarchical structure Horses have evolved to live in herds. As with many animals that live in large groups, establishment of a stable hierarchical system or "pecking order" is important to reduce aggression and increase group cohesion. This is often, but not always, a linear system. In non-linear hierarchies horse A may be dominant over horse B, who is dominant over horse C, yet horse C may be dominant over horse A. Dominance can depend on a variety of factors, including an individual's need for a particular resource at a given time. It can therefore be variable throughout the lifetime of the herd or individual animal. Some horses may be dominant over all resources and others may be submissive for all resources. This is not part of natural horse behavior. It is forced by humans forcing horses to live together in limited space with limited resources. 
So called "dominant horses" are often horses with dysfunctional social abilities - caused by human intervention in their early lives (weaning, stable isolation, etc.). Once a dominance hierarchy is established, horses more often than not will travel in rank order. Most young horses in the wild are allowed to stay with the herd until they reach sexual maturity, usually in their first or second year. Studies of wild herds have shown that the herd stallion will usually drive out both colts and fillies; this may be an instinct that prevents inbreeding. The fillies usually join another band soon afterward, and the colts driven out from several herds usually join in small "bachelor" groups until those who are able to establish dominance over an older stallion in another herd. Role of the lead mare Contrary to popular belief, the herd stallion is not the "ruler" of a harem of females, though he usually engages in herding and protective behavior. Rather, the horse that tends to lead a wild or feral herd is most commonly a dominant mare. The mare "guides the herd to food and water, controls the daily routine and movement of the herd, and ensures the general wellbeing of the herd." A recent supplemental theory posits that there is "distributed leadership", and no single individual is a universal herd leader. A 2014 study of horses in Italy, described as "feral" by the researcher, observed that some herd movements may be initiated by any individual, although higher-ranked members are followed more often by other herd members. Role of the stallion Stallions tend to stay on the periphery of the herd where they fight off both predators and other males. When the herd travels, the stallion is usually at the rear and apparently drives straggling herd members forward, keeping the herd together. Mares and lower-ranked males do not usually engage in this herding behavior. During the mating season, stallions tend to act more aggressively to keep the mares within the herd, however, most of the time, the stallion is relaxed and spends much of his time "guarding" the herd by scent-marking manure piles and urination spots to communicate his dominance as herd stallion. Ratio of stallions and mares Domesticated stallions, with human management, often mate with ("cover") more mares in a year than is possible in the wild. Traditionally, thoroughbred stud farms limited stallions to breeding with between 40 and 60 mares a year. By breeding mares only at the peak of their estrous cycle, a few thoroughbred stallions have mated with over 200 mares per year. With use of artificial insemination, one stallion could potentially sire thousands of offspring annually, though in practice, economic considerations usually limit the number of foals produced. Domesticated stallion behavior Some breeders keep horses in semi-natural conditions, with a single stallion amongst a group of mares. This is referred to as "pasture breeding." Young immature stallions are kept in a separate "bachelor herd." While this has advantages of less intensive labor for human caretakers, and full-time turnout (living in pasture) may be psychologically healthy for the horses, pasture breeding presents a risk of injury to valuable breeding stock, both stallions and mares, particularly when unfamiliar animals are added to the herd. It also raises questions of when or if a mare is bred, and may also raise questions as to parentage of foals. 
Therefore, keeping stallions in a natural herd is not common, especially on breeding farms mating multiple stallions to mares from other herds. Natural herds are more often kept on farms with closed herds, i.e. only one or a few stallions with a stable mare herd and few, if any, mares from other herds. Mature, domesticated stallions are commonly kept by themselves in a stable or small paddock. When stallions are stabled in a manner that allows visual and tactile communication, they will often challenge each another and sometimes attempt to fight. Therefore, stallions are often kept isolated from each other to reduce the risk of injury and disruption to the rest of the stable. If stallions are provided with access to paddocks, there is often a corridor between the paddocks so the stallions cannot touch each other. In some cases, stallions are released for exercise at different times of the day to ensure they do not see or hear each another. To avoid stable vices associated with isolation, some stallions are provided with a non-horse companion, such as a castrated donkey or a goat (the Godolphin Arabian was particularly fond of a barn cat). While many domesticated stallions become too aggressive to tolerate the close presence of any other male horse without fighting, some tolerate a gelding as a companion, particularly one that has a very calm temperament. One example of this was the racehorse Seabiscuit, who lived with a gelding companion named "Pumpkin". Other stallions may tolerate the close presence of an immature and less dominant stallion. Stallions and mares often compete together at horse shows and in horse races, however, stallions generally must be kept away from close contact with mares, both to avoid unintentional or unplanned matings, and away from other stallions to minimize fighting for dominance. When horses are lined up for award presentations at shows, handlers keep stallions at least one horse length from any other animal. Stallions can be taught to ignore mares or other stallions that are in close proximity while they are working. Stallions live peacefully in bachelor herds in the wild and in natural management settings. For example, the stallions in the New Forest (U.K.) live in bachelor herds on their winter grazing pastures. When managed as domesticated animals, some farms assert that carefully managed social contact benefits stallions. Well-tempered stallions intended to be kept together for a long period may be stabled in closer proximity, though this method of stabling is generally used only by experienced stable managers. An example of this is the stallions of the Spanish Riding School, which travel, train and are stabled in close proximity. In these settings, more dominant animals are kept apart by stabling a young or less dominant stallion in the stall between them. Dominance in domesticated herds Because domestication of the horse usually requires stallions to be isolated from other horses, either mares or geldings may become dominant in a domestic herd. Usually dominance in these cases is a matter of age and, to some extent, temperament. It is common for older animals to be dominant, though old and weak animals may lose their rank in the herd. There are also studies suggesting that a foal will "inherit" or perhaps imprint dominance behavior from its dam, and at maturity seek to obtain the same rank in a later herd that its mother held when the horse was young. 
Studies of domesticated horses indicate that horses appear to benefit from a strong female presence in the herd. Groupings of all geldings, or herds where a gelding is dominant over the rest of the herd; for example if the mares in the herd are quite young or of low status, may be more anxious as a group and less relaxed than those where a mare is dominant. Communication Horses communicate in various ways, including vocalizations such as nickering, squealing or whinnying; touch, through mutual grooming or nuzzling; smell; and body language. Horses use a combination of ear position, neck and head height, movement, and foot stomping or tail swishing to communicate. Discipline is maintained in a horse herd first through body language and gestures, then, if needed, through physical contact such as biting, kicking, nudging, or other means of forcing a misbehaving herd member to move. In most cases, the animal that successfully causes another to move is dominant, whether it uses only body language or adds physical reinforcement. Horses can interpret the body language of other creatures, including humans, whom they view as predators. If socialized to human contact, horses usually respond to humans as a non-threatening predator. Humans do not always understand this, however, and may behave in a way, particularly if using aggressive discipline, that resembles an attacking predator and triggers the horse's fight-or-flight response. On the other hand, some humans exhibit fear of a horse, and a horse may interpret this behavior as human submission to the authority of the horse, placing the human in a subordinate role in the horse's mind. This may lead the horse to behave in a more dominant and aggressive fashion. Human handlers are more successful if they learn to properly interpret a horse's body language and temper their own responses accordingly. Some methods of horse training explicitly instruct horse handlers to behave in ways that the horse will interpret as the behavior of a trusted leader in a herd and thus more willingly comply with commands from a human handler. Other methods encourage operant conditioning to teach the horse to respond in a desired way to human body language, but also teach handlers to recognize the meaning of horse body language. Horses are not particularly vocal, but do have four basic vocalizations: the neigh or whinny, the nicker, the squeal and the snort. They may also make sighing, grunting or groaning noises at times. Ear position is often one of the most obvious behaviors that humans notice when interpreting horse body language. In general, a horse will direct the pinna of an ear toward the source of input it is also looking at. Horses have a narrow range of binocular vision, and thus a horse with both ears forward is generally concentrating on something in front of it. Similarly, when a horse turns both ears forward, the degree of tension in the horse's pinna suggests if the animal is calmly attentive to its surroundings or tensely observing a potential danger. However, because horses have strong monocular vision, it is possible for a horse to position one ear forward and one ear back, indicative of similar divided visual attention. This behavior is often observed in horses while working with humans, where they need to simultaneously focus attention on both their handler and their surroundings. A horse may turn the pinna back when also seeing something coming up behind it. 
Due to the nature of a horse's vision, head position may indicate where the animal is focusing attention. To focus on a distant object, a horse will raise its head. To focus on an object close by, and especially on the ground, the horse will lower its nose and carry its head in a near-vertical position. Eyes rolled to the point that the white of the eye is visible often indicate fear or anger. Ear position, head height, and body language may change to reflect emotional status as well. For example, the clearest signal a horse sends is when both ears are flattened tightly back against the head, sometimes with eyes rolled so that the white of the eye shows, often indicative of pain or anger, frequently foreshadowing aggressive behavior that will soon follow. Sometimes ears laid back, especially when accompanied by a strongly swishing tail or stomping or pawing with the feet, are signals used by the horse to express discomfort, irritation, impatience, or anxiety. However, horses with ears slightly turned back but in a loose position may be drowsing, bored, fatigued, or simply relaxed. When a horse raises its head and neck, the animal is alert and often tense. A lowered head and neck may be a sign of relaxation, but depending on other behaviors may also indicate fatigue or illness. Tail motion may also be a form of communication. Slight tail swishing is often a tool to dislodge biting insects or other skin irritants. However, aggressive tail-swishing may indicate either irritation, pain or anger. The tail tucked tightly against the body may indicate discomfort due to cold or, in some cases, pain. The horse may demonstrate tension or excitement by raising its tail, but also by flaring its nostrils, snorting, and intently focusing its eyes and ears on the source of concern. The horse does not use its mouth to communicate to the degree that it uses its ears and tail, but a few mouth gestures have meaning beyond that of eating, grooming, or biting at an irritation. Bared teeth, as noted above, are an expression of anger and an imminent attempt to bite. Horses, particularly foals, sometimes indicate appeasement of a more aggressive herd member by extending their necks and clacking their teeth. Horses making a chewing motion with no food in the mouth do so as a soothing mechanism, possibly linked to a release of tension, though some horse trainers view it as an expression of submission. Horses will sometimes extend their upper lip when scratched in a particularly good spot, and if their mouth touches something at the time, their lip and teeth may move in a mutual grooming gesture. A very relaxed or sleeping horse may have a loose lower lip and chin that may extend further out than the upper lip. The curled lip flehmen response, noted above, most often is seen in stallions, but is usually a response to the smell of another horse's urine, and may be exhibited by horses of any sex. Horses also have assorted mouth motions that are a response to a bit or the rider's hands, some indicating relaxation and acceptance, others indicating tension or resistance. Sleep patterns Horses can sleep both standing up and lying down. They can sleep while standing, an adaptation from life as a prey animal in the wild, since lying down makes an animal more vulnerable to predators. Horses are able to sleep standing up because a "stay apparatus" in their legs allows them to relax their muscles and doze without collapsing. 
In the front legs, the horse's equine forelimb anatomy automatically engages the stay apparatus when the muscles relax. The horse engages the stay apparatus in the hind legs by shifting its hip position to lock the patella in place. At the stifle joint, a "hook" structure on the inside bottom end of the femur cups the patella and the medial patella ligament, preventing the leg from bending. Horses obtain needed sleep by many short periods of rest. This is to be expected of a prey animal, one that needs to be ready at a moment's notice to flee from predators. Horses may spend anywhere from four to fifteen hours a day in standing rest, and from a few minutes to several hours lying down. However, not all this time is the horse asleep; total sleep time in a day may range from several minutes to two hours. Horses require approximately two and a half hours of sleep, on average, in a 24-hour period. Most of this sleep occurs in many short intervals of about 15 minutes each. These short periods of sleep consist of five minutes of slow-wave sleep, followed by five minutes of rapid eye movement sleep (REM) and then another five minutes of slow-wave sleep. Horses must lie down to reach REM sleep. They only have to lie down for an hour or two every few days to meet their minimum REM sleep requirements. However, if a horse is never allowed to lie down, after several days it will become sleep-deprived, and in rare cases may suddenly collapse as it involuntarily slips into REM sleep while still standing. This condition differs from narcolepsy, although horses may suffer from that disorder as well. Horses sleep better when in groups because some animals will sleep while others stand guard to watch for predators. A horse kept entirely alone may not sleep well because its instincts are to keep a constant eye out for danger. Eating patterns Horses have a strong grazing instinct, preferring to spend most hours of the day eating forage. Horses and other equids evolved as grazing animals, adapted to eating small amounts of the same kind of food all day long. In the wild, the horse adapted to eating prairie grasses in semi-arid regions and traveling significant distances each day in order to obtain adequate nutrition. Thus, they are "trickle eaters," meaning they have to have an almost constant supply of food to keep their digestive system working properly. Horses can become anxious or stressed if there are long periods of time between meals. When stabled, they do best when they are fed on a regular schedule; they are creatures of habit and easily upset by changes in routine. When horses are in a herd, their behavior is hierarchical; the higher-ranked animals in the herd eat and drink first. Low-status animals, which eat last, may not get enough food, and if there is little available feed, higher-ranking horses may keep lower-ranking ones from eating at all. Psychological disorders When confined with insufficient companionship, exercise or stimulation, horses may develop stable vices, an assortment of compulsive stereotypies considered bad habits, mostly psychological in origin, that include wood chewing, stall walking (walking in circles stressfully in the stall), wall kicking, "weaving" (rocking back and forth) and other problems. These have been linked to a number of possible causal factors, including a lack of environmental stimulation and early weaning practices. Research is ongoing to investigate the neurobiological changes involved in the performance of these behaviors. 
See also Domestication of the horse Equus (genus) Glossary of equestrian terms Horse Horse breeding Horse care Horse training Sacking out Stable vices Equine intelligence Notes References Budiansky, Stephen. "The Nature of Horses". Free Press, 1997. McCall C.A (Professor of Animal Sciences, Auburn University) 2006, Understanding your horses’ behaviour, Alabama Co-operative Extension System, Alabama, viewed 21/10/13, External links The Horse Trust - Equine Clinical Animal Behaviour Hub Basics of Equine Behaviour - Equine Behaviour & Training Association Case Studies of Equine Behaviour - FAB Clinicians Ethology
Horse behavior
Biology
5,083
66,647,902
https://en.wikipedia.org/wiki/Leucopaxillus%20compactus
Leucopaxillus compactus is a species of fungus belonging to the family Tricholomataceae. It is native to Europe. References Tricholomataceae Fungus species
Leucopaxillus compactus
Biology
38
6,069,126
https://en.wikipedia.org/wiki/Time%20perception
In psychology and neuroscience, time perception or chronoception is the subjective experience, or sense, of time, which is measured by someone's own perception of the duration of the indefinite and unfolding of events. The perceived time interval between two successive events is referred to as perceived duration. Though directly experiencing or understanding another person's perception of time is not possible, perception can be objectively studied and inferred through a number of scientific experiments. Some temporal illusions help to expose the underlying neural mechanisms of time perception. The ancient Greeks recognized the difference between chronological time (chronos) and subjective time (kairos). Pioneering work on time perception, emphasizing species-specific differences, was conducted by Karl Ernst von Baer. Theories Time perception is typically categorized in three distinct ranges, because different ranges of duration are processed in different areas of the brain: Sub-second timing or millisecond timing Interval timing or seconds-to-minutes timing Circadian timing There are many theories and computational models for time perception mechanisms in the brain. William J. Friedman (1993) contrasted two theories of the sense of time: The strength model of time memory. This posits a memory trace that persists over time, by which one might judge the age of a memory (and therefore how long ago the event remembered occurred) from the strength of the trace. This conflicts with the fact that memories of recent events may fade more quickly than more distant memories. The inference model suggests the time of an event is inferred from information about relations between the event in question and other events whose date or time is known. Another hypothesis involves the brain's subconscious tallying of "pulses" during a specific interval, forming a biological stopwatch. This theory proposes that the brain can run multiple biological stopwatches independently depending on the type of tasks being tracked. The source and nature of the pulses is unclear. They are as yet a metaphor whose correspondence to brain anatomy or physiology is unknown. Philosophical perspectives The specious present is the time duration wherein a state of consciousness is experienced as being in the present. The term was first introduced by the philosopher E. R. Clay in 1882 (E. Robert Kelly), and was further developed by William James. James defined the specious present to be "the prototype of all conceived times... the short duration of which we are immediately and incessantly sensible". In "Scientific Thought" (1930), C. D. Broad further elaborated on the concept of the specious present and considered that the specious present may be considered as the temporal equivalent of a sensory datum. A version of the concept was used by Edmund Husserl in his works and discussed further by Francisco Varela based on the writings of Husserl, Heidegger, and Merleau-Ponty. Although the perception of time is not associated with a specific sensory system, psychologists and neuroscientists suggest that humans do have a system, or several complementary systems, governing the perception of time. Time perception is handled by a highly distributed system involving the cerebral cortex, cerebellum and basal ganglia. One particular component, the suprachiasmatic nucleus, is responsible for the circadian (or daily) rhythm, while other cell clusters appear to be capable of shorter (ultradian) timekeeping. 
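The "pulse-tallying" or biological-stopwatch hypothesis described above is often illustrated with a pacemaker-accumulator toy model. The sketch below is only that, a toy illustration of the metaphor, not a model from the article: the function name, nominal pulse rate, and noise values are assumptions chosen for readability.

```python
import random

def estimate_duration(true_seconds, nominal_rate_hz=10.0, rate_sd=1.0, seed=None):
    """Pacemaker-accumulator sketch: tally noisy 'pulses' over an interval,
    then read the tally back against the assumed nominal pulse rate."""
    rng = random.Random(seed)
    # The effective pulse rate drifts from trial to trial (e.g., arousal or drugs).
    effective_rate = max(0.1, rng.gauss(nominal_rate_hz, rate_sd))
    accumulated_pulses = effective_rate * true_seconds
    return accumulated_pulses / nominal_rate_hz

# If the clock runs faster than the nominal rate, the same 10 s interval is judged longer.
print(estimate_duration(10.0, seed=42))
```

In this framing, anything that speeds the pacemaker (stimulants, fear, higher body temperature, as discussed later in the article) inflates the tally and hence the duration estimate, while anything that slows it has the opposite effect.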
There is some evidence that very short (millisecond) durations are processed by dedicated neurons in early sensory parts of the brain. Warren Meck devised a physiological model for measuring the passage of time. He found the representation of time to be generated by the oscillatory activity of cells in the upper cortex. The frequency of these cells' activity is detected by cells in the dorsal striatum at the base of the forebrain. His model separated explicit timing and implicit timing. Explicit timing is used in estimating the duration of a stimulus. Implicit timing is used to gauge the amount of time separating one from an impending event that is expected to occur in the near future. These two estimations of time do not involve the same neuroanatomical areas. For example, implicit timing often occurs to achieve a motor task, involving the cerebellum, left parietal cortex, and left premotor cortex. Explicit timing often involves the supplementary motor area and the right prefrontal cortex. Two visual stimuli inside someone's field of view can be successfully regarded as simultaneous if they are separated by up to five milliseconds. In the popular essay "Brain Time", David Eagleman explains that different types of sensory information (auditory, tactile, visual, etc.) are processed at different speeds by different neural architectures. The brain must learn how to overcome these speed disparities if it is to create a temporally unified representation of the external world. Experiments have shown that rats can successfully estimate a time interval of approximately 40 seconds, despite having their cortex entirely removed. This suggests that time estimation may be a low-level process. Ecological perspectives In recent history, ecologists and psychologists have been interested in whether and how time is perceived by non-human animals, as well as which functional purposes are served by the ability to perceive time. Studies have demonstrated that many species of animals, including both vertebrates and invertebrates, have cognitive abilities that allow them to estimate and compare time intervals and durations in a similar way to humans. There is empirical evidence that metabolic rate has an impact on animals' ability to perceive time. In general, it is true within and across taxa that animals of smaller size (such as flies), which have a fast metabolic rate, experience time more slowly than animals of larger size, which have a slow metabolic rate. Researchers suppose that this could be the reason why small-bodied animals are generally better at perceiving time on a small scale, and why they are more agile than larger animals. Time perception in vertebrates Examples in fish In a lab experiment, goldfish were conditioned to receive a light stimulus followed shortly by an aversive electric shock, with a constant time interval between the two stimuli. Test subjects showed an increase in general activity around the time of the electric shock. This response persisted in further trials in which the light stimulus was kept but the electric shock was removed. This suggests that goldfish are able to perceive time intervals and to initiate an avoidance response at the time when they expect the distressing stimulus to happen. In two separate studies, golden shiners and dwarf inangas demonstrated the ability to associate the availability of food sources with specific locations and times of day, called time-place learning. 
In contrast, when tested for time-place learning based on predation risk, inangas were unable to associate spatiotemporal patterns to the presence or absence of predators. In June 2022, researchers reported in Physical Review Letters that salamanders were demonstrating counter-intuitive responses to the arrow of time in how their eyes perceived different stimuli. Examples in birds When presented with the choice between obtaining food at regular intervals (with a fixed delay between feedings) or at stochastic intervals (with a variable delay between feedings), starlings can discriminate between the two types of intervals and consistently prefer getting food at variable intervals. This is true whether the total amount of food is the same for both options or if the total amount of food is unpredictable in the variable option. This suggests that starlings have an inclination for risk-prone behavior. Pigeons are able to discriminate between different times of day and show time-place learning. After training, lab subjects were successfully able to peck specific keys at different times of day (morning or afternoon) in exchange for food, even after their sleep/wake cycle was artificially shifted. This suggests that to discriminate between different times of day, pigeons can use an internal timer (or circadian timer) that is independent of external cues. However, a more recent study on time-place learning in pigeons suggests that for a similar task, test subjects will switch to a non-circadian timing mechanism when possible to save energy resources. Experimental tests revealed that pigeons are also able to discriminate between cues of various durations (on the order of seconds), but that they are less accurate when timing auditory cues than when timing visual cues. Examples in mammals A study on privately owned dogs revealed that dogs are able to perceive durations ranging from minutes to several hours differently. Dogs reacted with increasing intensity to the return of their owners when they were left alone for longer durations, regardless of the owners' behavior. After being trained with food reinforcement, female wild boars are able to correctly estimate time intervals of days by asking for food at the end of each interval, but they are unable to accurately estimate time intervals of minutes with the same training method. When trained with positive reinforcement, rats can learn to respond to a signal of a certain duration, but not to signals of shorter or longer durations, which demonstrates that they can discriminate between different durations. Rats have demonstrated time-place learning, and can also learn to infer correct timing for a specific task by following an order of events, suggesting that they might be able to use an ordinal timing mechanism. Like pigeons, rats are thought to have the ability to use a circadian timing mechanism for discriminating time of day. Time perception in invertebrates When returning to the hive with nectar, forager honey bees need to know the current ratio of nectar-collecting to nectar-processing rates in the colony. To do so, they estimate the time it takes them to find a food-storer bee, which will unload the forage and store it. The longer it takes them to find one, the busier the food-storer bees are, and therefore the higher the nectar-collecting rate of the colony. Forager bees also assess the quality of nectar by comparing the length of time it takes to unload the forage: a longer unloading time indicates higher quality nectar. 
They compare their own unloading time to the unloading time of other foragers present in the hive, and adjust their recruiting behavior accordingly. For instance, honey bees reduce the duration of their waggle dance if they judge their own yield to be inferior. Scientists have demonstrated that anesthesia disrupts the circadian clock and impairs the time perception of honey bees, as observed in humans. Experiments revealed that a six-hour-long general anesthesia significantly delayed the start of the foraging behaviour of honeybees if induced during daytime, but not if induced during nighttime. Bumble bees can be successfully trained to respond to a stimulus after a certain time interval has elapsed (usually several seconds after the start signal). Studies have shown that they can also learn to simultaneously time multiple interval durations. In a single study, colonies from three species of ants from the genus Myrmica were trained to associate feeding sessions with different times. The training lasted several days; each day the feeding time was delayed by 20 minutes compared to the previous day. In all three species, at the end of the training, most individuals were present at the feeding spot at the correct expected times, suggesting that ants are able to estimate elapsed time, keep the expected feeding time in memory, and act in anticipation of it. Types of temporal illusions A temporal illusion is a distortion in the perception of time. For example: estimating time intervals, e.g., "When did you last see your primary care physician?"; estimating time duration, e.g., "How long were you waiting at the doctor's office?"; and judging the simultaneity of events (see below for examples). Main types of temporal illusions Telescoping effect: People tend to recall recent events as occurring further back in time than they actually did (backward telescoping) and distant events as occurring more recently than they actually did (forward telescoping). Vierordt's law: Shorter intervals tend to be overestimated while longer intervals tend to be underestimated. Time intervals associated with more changes may be perceived as longer than intervals with fewer changes. Perceived temporal length of a given task may shorten with greater motivation. Perceived temporal length of a given task may stretch when broken up or interrupted. Auditory stimuli may appear to last longer than visual stimuli. Time durations may appear longer with greater stimulus intensity (e.g., auditory loudness or pitch). Simultaneity judgments can be manipulated by repeated exposure to non-simultaneous stimuli. Kappa effect The Kappa effect or perceptual time dilation is a form of temporal illusion verifiable by experiment. The temporal duration between a sequence of consecutive stimuli is thought to be relatively longer or shorter than its actual elapsed time, due to the spatial/auditory/tactile separation between consecutive stimuli. The kappa effect can be displayed when considering a journey made in two parts that each take an equal amount of time. When mentally comparing these two sub-journeys, the part that covers more distance may appear to take longer than the part covering less distance, even though they take an equal amount of time. Eye movements and chronostasis The perception of space and time undergoes distortions during rapid saccadic eye movements. 
Chronostasis is a type of temporal illusion in which the first impression following the introduction of a new event or task demand to the brain appears to be extended in time. For example, chronostasis temporarily occurs when fixating on a target stimulus, immediately following a saccade (e.g., quick eye movement). This elicits an overestimation in the temporal duration for which that target stimulus (i.e., postsaccadic stimulus) was perceived. This effect can extend apparent durations by up to 500 ms and is consistent with the idea that the visual system models events prior to perception. The most well-known version of this illusion is known as the stopped-clock illusion, wherein a subject's first impression of the second-hand movement of an analog clock, subsequent to one's directed attention (i.e., saccade) to the clock, is the perception of a slower-than-normal second-hand movement rate (the second-hand of the clock may seemingly temporarily freeze in place after initially looking at it). The occurrence of chronostasis extends beyond the visual domain into the auditory and tactile domains. In the auditory domain, chronostasis and duration overestimation occur when observing auditory stimuli. One common example is a frequent occurrence when making telephone calls. If, while listening to the phone's dial tone, research subjects move the phone from one ear to the other, the length of time between rings appears longer. In the tactile domain, chronostasis has persisted in research subjects as they reach for and grasp objects. After grasping a new object, subjects overestimate the time in which their hand has been in contact with this object. Flash-lag effect In an experiment, participants were told to stare at an "x" symbol on a computer screen whereby a moving blue doughnut-like ring repeatedly circled the fixed "x" point. Occasionally, the ring would display a white flash for a split second that physically overlapped the ring's interior. However, when asked what was perceived, participants responded that they saw the white flash lagging behind the center of the moving ring. In other words, despite the reality that the two retinal images were actually spatially aligned, the flashed object was usually observed to trail a continuously moving object in space — a phenomenon referred to as the flash-lag effect. The first proposed explanation, called the "motion extrapolation" hypothesis, is that the visual system extrapolates the position of moving objects but not flashing objects when accounting for neural delays (i.e., the lag time between the retinal image and the observer's perception of the flashing object). The second proposed explanation by David Eagleman and Sejnowski, called the "latency difference" hypothesis, is that the visual system processes moving objects at a faster rate than flashed objects. In the attempt to disprove the first hypothesis, David Eagleman conducted an experiment in which the moving ring suddenly reverses direction to spin in the other way as the flashed object briefly appears. If the first hypothesis were correct, we would expect that, immediately following reversal, the moving object would be observed as lagging behind the flashed object. However, the experiment revealed the opposite — immediately following reversal, the flashed object was observed as lagging behind the moving object. This experimental result supports the "latency difference" hypothesis. 
A recent study tries to reconcile these different approaches by treating perception as an inference mechanism aiming to describe what is happening at the present time. Oddball effect Humans typically overestimate the perceived duration of the initial and final event in a stream of identical events. This oddball effect may serve an evolutionarily adapted "alerting" function and is consistent with reports of time slowing down in threatening situations. The effect seems to be strongest for images that are expanding in size on the retina, that is, images that are "looming" or approaching the viewer, and the effect can be eradicated for oddballs that are contracting or perceived to be receding from the viewer. The effect is also reduced or reversed with a static oddball presented among a stream of expanding stimuli. Initial studies suggested that this oddball-induced "subjective time dilation" expanded the perceived duration of oddball stimuli by 30–50% but subsequent research has reported more modest expansion of around 10% or less. The direction of the effect, whether the viewer perceives an increase or a decrease in duration, also seems to be dependent upon the stimulus used. Reversal of temporal order judgment Numerous experimental findings suggest that temporal order judgments of actions preceding effects can be reversed under special circumstances. Experiments have shown that sensory simultaneity judgments can be manipulated by repeated exposure to non-simultaneous stimuli. In an experiment conducted by David Eagleman, a temporal order judgment reversal was induced in subjects by exposing them to delayed motor consequences. In the experiment, subjects played various forms of video games. Unknown to the subjects, the experimenters introduced a fixed delay between the mouse movements and the subsequent sensory feedback. For example, a subject may not see a movement register on the screen until 150 milliseconds after they moved the mouse. Participants playing the game quickly adapted to the delay and felt as though there was less delay between their mouse movement and the sensory feedback. Shortly after the experimenters removed the delay, the subjects commonly felt as though the effect on the screen happened just before they commanded it. This work addresses how the perceived timing of effects is modulated by expectations, and the extent to which such predictions are quickly modifiable. In an experiment conducted by Haggard and colleagues in 2002, participants pressed a button that triggered a flash of light at a distance, after a slight delay of 100 milliseconds. By repeatedly engaging in this act, participants had adapted to the delay (i.e., they experienced a gradual shortening in the perceived time interval between pressing the button and seeing the flash of light). The experimenters then showed the flash of light instantly after the button was pressed. In response, subjects often thought that the flash (the effect) had occurred before the button was pressed (the cause). Additionally, when the experimenters slightly reduced the delay, and shortened the spatial distance between the button and the flash of light, participants often again claimed to have experienced the effect before the cause. Several experiments also suggest that temporal order judgment of a pair of tactile stimuli delivered in rapid succession, one to each hand, is noticeably impaired (i.e., misreported) by crossing the hands over the midline. 
However, congenitally blind subjects showed no trace of temporal order judgment reversal after crossing the arms. These results suggest that tactile signals taken in by the congenitally blind are ordered in time without being referred to a visuospatial representation. Unlike the congenitally blind subjects, the temporal order judgments of the late-onset blind subjects were impaired when crossing the arms to a similar extent as non-blind subjects. These results suggest that the association between tactile signals and visuospatial representation is maintained once it is established during infancy. Some research studies have also found that subjects showed a smaller deficit in tactile temporal order judgments when the arms were crossed behind the back than when they were crossed in front. Physiological associations Tachypsychia Tachypsychia is a neurological condition that alters the perception of time, usually induced by physical exertion, drug use, or a traumatic event. For someone affected by tachypsychia, time perceived by the individual either lengthens, making events appear to slow down, or contracts, with objects appearing to move in a speeding blur. Effects of emotional states Awe Research has suggested that the feeling of awe can expand one's perception of time availability. Awe can be characterized as an experience of immense perceptual vastness that coincides with an increase in focus. Consequently, it is conceivable that one's temporal perception would slow down when experiencing awe. The perception of time can differ as people choose between savoring moments and deferring gratification. Fear Possibly related to the oddball effect, research suggests that time seems to slow down for a person during dangerous events (such as a car accident, a robbery, or when a person perceives a potential predator or mate), or when a person skydives or bungee jumps, where they are capable of complex thoughts in what would normally be the blink of an eye (See Fight-or-flight response). This reported slowing in temporal perception may have been evolutionarily advantageous because it may have enhanced one's ability to make intelligent, quick decisions in moments that were of critical importance to our survival. However, even though observers commonly report that time seems to have moved in slow motion during these events, it is unclear whether this is a function of increased time resolution during the event, or instead an illusion created by the remembering of an emotionally salient event. A strong time dilation effect has been reported for perception of objects that were looming, but not of those retreating, from the viewer, suggesting that the expanding discs (which mimic an approaching object) elicit self-referential processes which act to signal the presence of a possible danger. Anxious people, or those in great fear, experience greater "time dilation" in response to the same threat stimuli due to higher levels of epinephrine, which increases brain activity (an adrenaline rush). In such circumstances, an illusion of time dilation could assist an effective escape. When exposed to a threat, three-year-old children were observed to exhibit a similar tendency to overestimate elapsed time. Research suggests that the effect appears only at the point of retrospective assessment, rather than occurring simultaneously with events as they happened. Perceptual abilities were tested during a frightening experience (a free fall) by measuring people's sensitivity to flickering stimuli. 
The results showed that the subjects' temporal resolution was not improved as the frightening event was occurring. Events appear to have taken longer only in retrospect, possibly because memories were being more densely packed during the frightening situation. Other researchers suggest that additional variables could lead to a different state of consciousness in which altered time perception does occur during an event. Research does demonstrate that visual sensory processing increases in scenarios involving action preparation. Participants demonstrated a higher detection rate of rapidly presented symbols when preparing to move, as compared to a control without movement. People shown extracts from films known to induce fear often overestimated the elapsed time of a subsequently presented visual stimulus, whereas people shown emotionally neutral clips (weather forecasts and stock market updates) or those known to evoke feelings of sadness showed no difference. It is argued that fear prompts a state of arousal in the amygdala, which increases the rate of a hypothesized "internal clock". This could be the result of an evolved defensive mechanism triggered by a threatening situation. Individuals experiencing sudden or surprising events, real or imagined (e.g., witnessing a crime, or believing one is seeing a ghost), may overestimate the duration of the event. Changes with age Psychologists have found that the subjective perception of the passing of time tends to speed up with increasing age in humans. This often causes people to increasingly underestimate a given interval of time as they age. This fact can likely be attributed to a variety of age-related changes in the aging brain, such as the lowering in dopaminergic levels with older age; however, the details are still being debated. Very young children will first experience the passing of time when they can subjectively perceive and reflect on the unfolding of a collection of events. A child's awareness of time develops during childhood, when the child's attention and short-term memory capacities form; this developmental process is thought to be dependent on the slow maturation of the prefrontal cortex and hippocampus. The common explanation is that most external and internal experiences are new for young children but repetitive for adults. Children have to be extremely engaged (i.e. dedicate many neural resources or significant brain power) in the present moment because they must constantly reconfigure their mental models of the world to assimilate it and manage behaviour properly. Adults, however, may rarely need to step outside mental habits and external routines. When an adult frequently experiences the same stimuli, such stimuli may seem "invisible" as a result of having already been sufficiently mapped by the brain. This phenomenon is known as neural adaptation. According to this picture, the rate of new stimuli and new experiences may decrease with age as does the number of new memories created to record them. If one then assumes that the perceived duration of a given interval of time is linked to how many new memories are formed during it, the aging adult may underestimate long stretches of time because, in their recollection, these now contain fewer memory-creating events. Consequently, the subjective perception is often that time passes by at a faster rate with age. Proportional to the real time Let S be subjective time and R be real time, and define both to be zero at birth. 
One model proposes that the passage of subjective time relative to actual time is inversely proportional to real time: dS/dR = K/R. When solved, S = K ln(R) + c. One day would be approximately 1/4,000 of the life of an 11-year-old, but approximately 1/20,000 of the life of a 55-year-old. This helps to explain why a random, ordinary day may therefore appear longer for a young child than for an adult. So a year would be experienced by a 55-year-old as passing approximately five times more quickly than a year experienced by an 11-year-old, since 55/11 = 5. If long-term time perception is based solely on the proportionality of a person's age, then the following four periods in life would appear to be quantitatively equal: ages 5–10 (1x), ages 10–20 (2x), ages 20–40 (4x), age 40–80 (8x), as the end age is twice the start age. However, this does not work for ages 0–10, which corresponds to ages 10–∞. Proportional to the subjective time Lemlich posits that the passage of subjective time relative to actual time is inversely proportional to total subjective time, rather than the total real time: dS/dR = K/S. When mathematically solved, S² = 2KR + c. It avoids the issue of infinite subjective time passing from real age 0 to 1 year, as the asymptote can be integrated in an improper integral. Using the boundary conditions S = 0 when R = 0 and K > 0, this gives S = √(2KR). This means that time appears to pass in proportion to the square root of the perceiver's real age, rather than in direct proportion. Under this model, a 55-year-old would subjectively experience time passing √(55/11) = √5 ≈ 2.2 times more quickly than an 11-year-old, rather than five times as under the previous model. This means the following periods in life would appear to be quantitatively equal: ages 0–1, 1–4, 4–9, 9–16, 16–25, 25–36, 36–49, 49–64, 64–81, 81–100, 100–121. In a study, participants consistently provided answers that fit this model when asked about time perception at 1/4 of their age, but were less consistent for 1/2 of their age. Their answers suggest that this model is more accurate than the previous one. A consequence of this model is that the fraction of subjective life remaining is always less than the fraction of real life remaining, but it is always more than one half of the fraction of real life remaining. This can be seen by comparing the two fractions: for a total lifespan R_T, the subjective fraction remaining is 1 − √(R/R_T) while the real fraction remaining is 1 − R/R_T, and their ratio, 1/(1 + √(R/R_T)), always lies between one half and one. Effects of drugs on time perception Stimulants such as thyroxine, caffeine, and amphetamines lead to overestimation of time intervals by both humans and rats, while depressants and anesthetics such as barbiturates and nitrous oxide can have the opposite effect and lead to underestimation of time intervals. The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this. Research on stimulant-dependent individuals (SDI) has shown several abnormal time processing characteristics, including the need for larger time differences for effective duration discrimination and overestimation of the duration of a relatively long time interval. Altered time processing and perception in SDI could explain the difficulty SDI have with delaying gratification. Another study examined dose-dependent effects on time perception in methamphetamine-dependent individuals during short-term abstinence. Results show that motor timing, but not perceptual timing, was altered in methamphetamine-dependent individuals, an alteration that persisted for at least three months of abstinence. Dose-dependent effects on time perception were only observed when short-term abstinent meth abusers processed long time intervals. The study concluded that time perception alteration in meth dependents is task specific and dose dependent. 
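The contrast between the two age models above reduces to simple arithmetic, sketched below. The function names are illustrative only; the calculation follows directly from dS/dR = K/R for the logarithmic model and dS/dR = K/S (hence S proportional to √R) for Lemlich's model.

```python
import math

def yearly_speedup_log(older_age, younger_age):
    """Under dS/dR = K/R, the relative speed of perceived time at the older age
    is (K/younger_age)/(K/older_age) = older_age/younger_age."""
    return older_age / younger_age

def yearly_speedup_sqrt(older_age, younger_age):
    """Under Lemlich's dS/dR = K/S with S = sqrt(2KR), the relative speed
    is sqrt(older_age/younger_age)."""
    return math.sqrt(older_age / younger_age)

print(yearly_speedup_log(55, 11))   # 5.0: a year at 55 feels five times faster than at 11
print(yearly_speedup_sqrt(55, 11))  # ~2.24 under the square-root model
```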
The effect of cannabis on time perception has been studied with inconclusive results mainly due to methodological variations and the paucity of research. Even though 70% of time estimation studies report over-estimation, the findings of time production and time reproduction studies remain inconclusive. Studies show consistently throughout the literature that most cannabis users self-report the experience of a slowed perception of time. In the laboratory, researchers have confirmed the effect of cannabis on the perception of time in both humans and animals. Using PET scans it was observed that participants who showed a decrease in cerebellar blood flow (CBF) also had a significant alteration in time sense. The relationship between decreased CBF and impaired time sense is of interest as the cerebellum is linked to an internal timing system. Effects of body temperature The chemical clock hypothesis implies a causal link between body temperature and the perception of time. Past work show that increasing body temperature tends to make individuals experience a dilated perception of time and they perceive durations as shorter than they actually were, ultimately leading them to underestimate time durations. While decreasing body temperature has the opposite effect – causing participants to experience a condensed perception of time leading them to over-estimate time duration – observations of the latter type were rare. Research establishes a parametric effect of body temperature on time perception with higher temperatures generally producing faster subjective time and vice versa. This is especially seen to be true under changes in arousal levels and stressful events. Applications Since subjective time is measurable, through information such as heartbeats or actions taken within a time period, there are analytical applications for time perception. Social networks Time perception can be used as a tool in social networks to define the subjective experiences of each node within a system. This method can be used to study characters' psychology in dramas, both film and literature, analyzed by social networks. Each character's subjective time may be calculated, with methods as simple as word counting, and compared to the real time of the story to shed light on their internal states. See also Arrow of time Time dilation Déjà vu Dyschronometria Benjamin Libet Temporal resolution Time perspective (Wikiversity) References Further reading External links Time perception research at the University of Manchester Time Sense: Polychronicity and Monochronicity "A Cognitive Model of Retrospective Duration Estimations", Hee-Kyung Ahn, et al., March 7, 2006. "Time, Force, Motion, and the Semantics of Natural Languages", Wolfgang Wildgen, Antwerp Papers in Linguistics, 2003/2004. Can Time Slow Down? "Interactions emerge between biological clocks", The Pharmaceutical Journal, Vol 275 No 7376 p644, 19 November 2005 Registration required. Picture Space Time helps to add Time Perception to Photographs using sound Cognition Concepts in the philosophy of mind Metaphysics of mind Perception Philosophy of science Philosophy of time Concepts in metaphysics Thought Perception Time management Unsolved problems in neuroscience Unsolved problems in physics
Time perception
Physics
6,803
56,066,942
https://en.wikipedia.org/wiki/Disc%20cutting%20lathe
A disc cutting lathe is a device used to transfer an audio signal to the modulated spiral groove of a blank master disc for the production of phonograph records. Disc cutting lathes were also used to produce broadcast transcription discs and for direct-to-disc recording. Overview Disc cutting lathes utilize an audio signal, sent through a cutting amplifier to the cutter head, which controls the cutting stylus. The cutting stylus engraves a modulated spiral groove corresponding to the audio signal into the lacquer coating of the master disc. The direct metal mastering (DMM) process uses a copper-coated rather than lacquer-coated disc. Before lacquer discs, master recordings were cut into blank wax discs. Once complete, this master disc is used to produce matrices from which the record is pressed. For all intents and purposes, the finished record is a facsimile of this master disc. History Prior to the success of Western Electric's "Westrex" system, master discs were produced acoustically and without electricity. In 1921, John J. Scully, a former Columbia Phonograph Company employee, designed and built a weight-driven lathe specifically designed for use by phonograph manufacturers. The first Scully lathe was sold to Cameo Records. John's son, Lawrence, founded Scully Recording Instruments. In 1924, Western Electric purchased a Scully weight-driven lathe to demonstrate their "Westrex" cutter head and electronics for both the Columbia Phonograph Company and Victor Talking Machine Company. Both companies began using the Westrex system for recording sessions in 1925 after agreeing to license the system from Western Electric. In 1931, German manufacturer Georg Neumann & Co. introduced the AM31 disc-cutting lathe, which employed a direct-drive design. Two years later, Neumann introduced a portable lathe capable of making recordings on location. Imports of Neumann lathes into the United States were restricted, however, and Neumann lathes were not imported to the United States until the 1960s. Scully dominated the U.S. marketplace for professional recording lathes from the 1930s to the 1960s, and almost all American lacquer masters were cut using a Scully lathe, often fitted with the Westrex cutter head and electronics. In 1947, the Presto 1D, Fairchild 542, and Cook feedback cutters represented major improvements in disc-cutting technology. In 1950 Scully Recording Instruments introduced a disc cutting lathe with variable pitch, which made it possible to vary the width of the grooves (i.e. the pitch) of a master disc, simultaneously conserving the available recording space of the disc while preserving the dynamics and fidelity of the recorded material. Five years later, the company introduced automation for this variable pitch feature. In 1957, Westrex demonstrated the first commercial "45/45" stereo cutter head. In 1966, Neumann introduced the VMS66, followed by the VMS70 (1970) and the VMS80 (1980), which introduced variable pitch to Neumann's offerings, reducing speed fluctuations to achieve smoother sound and extended dynamic range. Unlike other systems, Neumann's disc cutting system was complete and included the lathe, cutter head, and electronics. References External links Audio Record Magazine: Quick Facts On Disc Recorders (October 1952) Sound recording technology
Disc cutting lathe
Technology
676
42,174,704
https://en.wikipedia.org/wiki/Pycnoporus%20coccineus
Pycnoporus coccineus is a saprophytic, white-rot decomposer fungus in the family Polyporaceae. A widely distributed species, the fungus was first described scientifically by Elias Magnus Fries in 1851. A study conducted by Couturier et al. (2015) concluded that the combined analysis of sugar and solid residues showed the suitability of enzymes secreted by P. coccineus for softwood degradation. P. coccineus is a promising model to better understand the challenges of softwood biomass deconstruction and its use in biorefinery processes. References External links Aboriginal use of fungi, from Australian National Botanic Gardens Fungi described in 1851 Fungi of Asia Fungi of Australia Fungi of Europe Fungi of New Zealand Fungi of North America Polyporaceae Taxa named by Elias Magnus Fries Fungus species
Pycnoporus coccineus
Biology
174
1,156,819
https://en.wikipedia.org/wiki/Population%20ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations, that is, how population sizes change through births, deaths, immigration, and emigration, and how these populations interact with the environment. The discipline is important in conservation biology, especially in the development of population viability analysis, which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics. History In the 1940s, ecology was divided into autecology (the study of individual species in relation to the environment) and synecology (the study of groups of species in relation to the environment). The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge") refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), so that there were four subdivisions of ecology. Terminology A population is defined as a group of interacting organisms of the same species. Populations are often quantified in terms of their demographic structure. The total number of individuals in a population is defined as the population size, and how dense these individuals are is defined as the population density. A population also has a geographic range, the limits of which are set by the conditions the species can tolerate (such as temperature). Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Births, deaths, emigration, and immigration rates all play a significant role in growth rate. The maximum per capita growth rate for a population is known as the intrinsic rate of increase. The carrying capacity is the maximum population size of the species that the environment can sustain, which is determined by the resources available. In many classic population models, r represents the intrinsic growth rate, K the carrying capacity, and N0 the initial population size. Population dynamics The development of population ecology owes much to the mathematical models known as population dynamics, which were originally formulae derived from demography at the end of the 18th and beginning of the 19th century. The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model. A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. 
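For readers unfamiliar with the symbols r, K and N0, the standard exponential (Malthusian) and logistic (Verhulst) forms in which they conventionally appear are given below; the article itself only names the symbols, so treating these as the intended model forms is an assumption.

```latex
\frac{dN}{dt} = rN, \qquad N(t) = N_0 e^{rt} \qquad \text{(exponential growth)}
```

```latex
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right), \qquad
N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-rt}} \qquad \text{(logistic growth)}
```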
The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations. Exponential vs. logistic growth When describing growth models, two main types of model are most commonly used: exponential and logistic growth. When the per capita rate of increase takes the same positive value regardless of population size, the graph shows exponential growth. Exponential growth assumes unlimited resources and no predation. An example of exponential population growth is that of the Monk Parakeets in the United States. Originally from South America, Monk Parakeets were either released or escaped from people who owned them. These birds experienced exponential growth from 1975 to 1994, growing to about 55 times their 1975 population size. This growth is likely due to reproduction within their population, as opposed to the addition of more birds from South America (Van Bael & Prudet-Jones 1996). When the per capita rate of increase decreases as the population increases towards the maximum limit, or carrying capacity, the graph shows logistic growth. Environmental and social variables, along with many others, impact the carrying capacity of a population, meaning that the carrying capacity can change (Schacht 1980). Fisheries and wildlife management In fisheries and wildlife management, population is affected by three dynamic rate functions. Natality or birth rate, often measured as recruitment, which means reaching a certain size or reproductive stage; in fisheries, this usually refers to the age at which a fish can be caught and counted in nets. Population growth rate, which measures the growth of individuals in size and length; this is more important in fisheries, where population is often measured in biomass. Mortality, which includes harvest mortality and natural mortality; natural mortality includes non-human predation, disease and old age. If N1 is the number of individuals at time 1, then N1 = N0 + B − D + I − E, where N0 is the number of individuals at time 0, B is the number of individuals born, D the number that died, I the number that immigrated, and E the number that emigrated between time 0 and time 1. If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured. All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish & game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size. For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). 
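The bookkeeping identity N1 = N0 + B − D + I − E above can be applied step by step to project a census forward. The sketch below is only an illustration of that accounting; the counts are made up and not drawn from any study cited in the article.

```python
def project_population(n0, births, deaths, immigrants, emigrants):
    """Apply N1 = N0 + B - D + I - E over successive intervals.
    Each input list gives per-interval counts; returns the full trajectory."""
    trajectory = [n0]
    for b, d, i, e in zip(births, deaths, immigrants, emigrants):
        trajectory.append(trajectory[-1] + b - d + i - e)
    return trajectory

# Hypothetical counts for three census intervals:
print(project_population(100, births=[20, 22, 25], deaths=[10, 12, 11],
                         immigrants=[5, 3, 4], emigrants=[2, 2, 3]))
# [100, 113, 124, 139]
```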
Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium (a worked example under an assumed logistic model is sketched after this section). While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among some scientists, it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries. To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area were open to fishing. r/K selection An important concept in population ecology is the r/K selection theory. For example, an animal may face a choice between producing many offspring or only a few, and between investing a lot of effort or little effort in each offspring; these are examples of trade-offs. In order to thrive, species must balance these trade-offs in the way that works best for them, leading to a clear distinction between r-selected and K-selected species. The first variable is r (the intrinsic rate of natural increase in population size, density independent) and the second variable is K (the carrying capacity of a population, density dependent). It is important to understand the difference between density-independent factors, which relate to the intrinsic rate of increase, and density-dependent factors, which relate to the carrying capacity. Carrying capacity applies only to density-dependent populations. Density-dependent factors that influence the carrying capacity include predation, harvest, and genetics, so when estimating the carrying capacity it is important to look at the predation or harvest rates that influence the population (Stewart 2004). An r-selected species (e.g., many kinds of insects, such as aphids) is one that has high rates of fecundity, low levels of parental investment in the young, and high rates of mortality before individuals reach maturity. Evolution favors productivity in r-selected species. In contrast, a K-selected species (such as humans) has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Evolution in K-selected species favors efficiency in the conversion of more resources into fewer offspring. K-selected species generally experience stronger competition, where populations generally live near carrying capacity. These species have heavy investment in offspring, resulting in longer-lived organisms and a longer period of maturation. Offspring of K-selected species generally have a higher probability of survival, due to heavy parental care and nurturing. Offspring Quality Offspring fitness is mainly affected by the size and quality of the individual offspring, depending on the species. Factors that contribute to the relative fitness of offspring include the resources the parents provide to their young and morphological traits inherited from the parents. The overall success of the offspring after birth or hatching is measured by the survival of the young, their growth rate, and their own reproductive success. Whether the young are raised by their natural parents or by foster parents has been found to have no effect; the offspring need the proper resources to survive (Kristi 2010). 
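As a hedged illustration of the harvestable-surplus calculation referenced above, assume logistic growth (an assumption, since the article does not fix a model): the surplus production at stock size N is rN(1 − N/K), which is largest at N = K/2, giving MSY = rK/4. The parameter values below are illustrative only.

```python
def surplus_production(n, r, k):
    """Logistic surplus production: the harvest that leaves a stock of size n unchanged."""
    return r * n * (1 - n / k)

def msy(r, k):
    """Maximum sustainable yield under the logistic model, attained at n = k/2."""
    return r * k / 4

r, k = 0.4, 10_000
print(max(surplus_production(n, r, k) for n in range(k + 1)))  # 1000.0 at n = 5000
print(msy(r, k))                                               # 1000.0
```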
A study of egg size and offspring quality in birds found, in summary, that egg size contributes to the overall fitness of the offspring. This relates directly to the Type I survivorship curve, in that if offspring are cared for by a parent during the early stages of life, most deaths occur later in life. However, if the offspring are not cared for by the parents because a larger number of eggs is produced, then the survivorship curve will be similar to Type III, in that most offspring die early and those that survive the early period are likely to survive later in life. Top-down and bottom-up controls Top-down controls In some populations, organisms in lower trophic levels are controlled by organisms at the top. This is known as top-down control. For example, the presence of top carnivores keeps herbivore populations in check. If there were no top carnivores in the ecosystem, then herbivore populations would rapidly increase, leading to all plants being eaten. This ecosystem would eventually collapse. Bottom-up controls Bottom-up controls, on the other hand, are driven by producers in the ecosystem. If plant populations change, then the populations of all species would be impacted. For example, if plant populations decreased significantly, the herbivore populations would decrease, which would lead to the carnivore populations decreasing too. Therefore, if all of the plants disappeared, then the ecosystem would collapse. Another example would be two herbivore populations competing for the same plants; the competition would lead to the eventual removal of one population. Do all ecosystems have to be either top-down or bottom-up? An ecosystem does not have to be exclusively top-down or bottom-up. There are occasions where an ecosystem could be mostly bottom-up, such as a marine ecosystem, but then have periods of top-down control due to fishing. Survivorship curves Survivorship curves are graphs that show the distribution of survivors in a population according to age. Survivorship curves play an important role in comparing generations, populations, or even different species. A Type I survivorship curve is characterized by the fact that death occurs in the later years of an organism's life (mostly mammals). In other words, most organisms reach the maximum expected lifespan, and life expectancy and the age of death go hand-in-hand (Demetrius 1978). Typically, Type I survivorship curves characterize K-selected species. Type II survivorship shows that death at any age is equally probable. This means that the chances of death are not dependent on or affected by the age of that organism. Type III curves indicate few surviving the younger years, but after a certain age, individuals are much more likely to survive. Type III survivorship typically characterizes r-selected species. Metapopulation Populations are also studied and conceptualized through the "metapopulation" concept. The metapopulation concept was introduced in 1969: "as a population of populations which go extinct locally and recolonize." Metapopulation ecology is a simplified model of the landscape as patches of varying levels of quality. Patches are either occupied or they are not. Migrants moving among the patches are structured into metapopulations either as sources or sinks. Source patches are productive sites that generate a seasonal supply of migrants to other patch locations. Sink patches are unproductive sites that only receive migrants. 
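A small sketch of the three survivorship-curve shapes, using made-up age-specific mortality schedules purely to illustrate the definitions (Type I: mortality concentrated late in life; Type II: constant; Type III: mortality concentrated early):

```python
import math

def survivorship(hazard, ages):
    """Fraction l(x) of a cohort still alive at each age, given a per-age-class hazard."""
    alive, curve = 1.0, []
    for x in ages:
        curve.append(alive)
        alive *= math.exp(-hazard(x))
    return curve

ages = range(0, 100, 10)
type_i   = survivorship(lambda x: 0.02 if x < 70 else 1.0, ages)   # most deaths late in life
type_ii  = survivorship(lambda x: 0.30, ages)                      # equal chance of death at any age
type_iii = survivorship(lambda x: 1.5 if x < 10 else 0.02, ages)   # most deaths early in life
for name, curve in [("I", type_i), ("II", type_ii), ("III", type_iii)]:
    print(name, [round(v, 2) for v in curve])
```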
In metapopulation terminology there are emigrants (individuals that leave a patch) and immigrants (individuals that move into a patch). Metapopulation models examine patch dynamics over time to answer questions about spatial and demographic ecology. An important concept in metapopulation ecology is the rescue effect, where small patches of lower quality (i.e., sinks) are maintained by a seasonal influx of new immigrants. Metapopulation structure evolves from year to year, where some patches are sinks, such as dry years, and become sources when conditions are more favorable. Ecologists utilize a mixture of computer models and field studies to explain metapopulation structure. Metapopulation ecology allows for ecologists to take in a wide range of factors when examining a metapopulation like genetics, the bottle-neck effect, and many more. Metapopulation data is extremely useful in understanding population dynamics as most species are not numerous and require specific resources from their habitats. In addition, metapopulation ecology allows for a deeper understanding of the effects of habitat loss, and can help to predict the future of a habitat. To elaborate, metapopulation ecology assumes that, before a habitat becomes uninhabitable, the species in it will emigrate out, or die off. This information is helpful to ecologists in determining what, if anything, can be done to aid a declining habitat. Overall, the information that metapopulation ecology provides is useful to ecologists in many ways (Hanski 1998). Journals The first journal publication of the Society of Population Ecology, titled Population Ecology (originally called Researches on Population Ecology) was released in 1952. Scientific articles on population ecology can also be found in the Journal of Animal Ecology, Oikos and other journals. See also Density-dependent inhibition Ecological overshoot Irruptive growth Lists of organisms by population Overpopulation Population density Population distribution Population dynamics Population dynamics of fisheries Population genetics Population growth Theoretical ecology References Further reading Bibliography Applied statistics Ecology
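The 1969 definition quoted above is generally attributed to Richard Levins, and his patch-occupancy model is the simplest quantitative version of the source-and-sink picture described in this section. A minimal sketch with illustrative parameter values (p is the fraction of patches occupied, c the colonization rate, e the local extinction rate):

```python
# Classic Levins-style patch-occupancy model (illustrative parameters):
#   dp/dt = c * p * (1 - p) - e * p, with equilibrium p* = 1 - e/c when c > e.
def simulate_occupancy(p0, c, e, dt=0.01, steps=5000):
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1 - p) - e * p)
    return p

print(round(simulate_occupancy(0.05, c=0.5, e=0.2), 3))  # approaches 1 - e/c = 0.6
print(round(simulate_occupancy(0.05, c=0.1, e=0.2), 3))  # extinction exceeds colonization: occupancy collapses
```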
Population ecology
Mathematics
3,086
27,902,531
https://en.wikipedia.org/wiki/Duncan%27s%20taxonomy
Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990. Duncan suggested modifications to Flynn's taxonomy to include pipelined vector processors. Taxonomy The taxonomy was developed during 1988-1990 and was first published in 1990. Its original categories are indicated below. Synchronous architectures This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism. Pipelined vector processors Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time. Parallelism is provided both by the pipelining in individual functional units described above, as well as by operating multiple units of this kind in parallel and by chaining the output of one unit into another unit as input. Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed functional units from special memory buffers are designated as memory-to-memory architectures. Examples of register-to-register architectures include the Cray-1 and Fujitsu VP-200, while the Control Data Corporation STAR-100, CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures. The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and Nippon Electric Corporation SX-3, that supported 4-10 vector processors with a shared memory (see NEC SX architecture). SIMD This scheme uses the SIMD (single instruction stream, multiple data stream) category from Flynn's taxonomy as a root class for processor array and associative memory subclasses. SIMD architectures are characterized by having a control unit broadcast a common instruction to all processing elements, which execute that instruction in lockstep on diverse operands from local data. Common features include the ability for individual processors to disable an instruction and the ability to propagate instruction results to immediate neighbors over an interconnection network. Processor array Associative memory Systolic array Systolic arrays, proposed during the 1980s, are multiprocessors in which data and partial results are rhythmically pumped from processor to processor through a regular, local interconnection network. Systolic architectures use a global clock and explicit timing delays to synchronize data flow from processor to processor. Each processor in a systolic system executes an invariant sequence of instructions before data and results are pulsed to neighboring processors. MIMD architectures Based on Flynn's multiple-instruction-multiple-data streams terminology, this category spans a wide spectrum of architectures in which processors execute multiple instruction sequences on (potentially) dissimilar data streams without strict synchronization. Although both instruction and data streams can be different for each processor, they need not be. 
Thus, MIMD architectures can run identical programs that are in various stages at any given time, run unique instruction and data streams on each processor or execute a combination of each these scenarios. This category is subdivided further primarily on the basis of memory organization. Distributed memory Shared memory MIMD-paradigm architectures The MIMD-based paradigms category subsumes systems in which a specific programming or execution paradigm is at least as fundamental to the architectural design as structural considerations are. Thus, the design of dataflow architectures and reduction machines is as much the product of supporting their distinctive execution paradigm as it is a product of connecting processors and memories in MIMD fashion. The category's subdivisions are defined by these paradigms. MIMD/SIMD hybrid Dataflow machine Reduction machine Wavefront array References C Xavier and S S Iyengar, Introduction to Parallel Programming Computer architecture
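To make the shape of the classification above concrete, the category tree from the two preceding sections can be encoded as a small nested structure; this is only an illustration of the taxonomy's hierarchy, not material from Duncan's paper:

```python
# Duncan's taxonomy as a nested structure (leaf lists name the subclasses given above).
DUNCAN_TAXONOMY = {
    "Synchronous": {
        "Pipelined vector processors": ["register-to-register", "memory-to-memory"],
        "SIMD": ["Processor array", "Associative memory"],
        "Systolic array": [],
    },
    "MIMD": {
        "Distributed memory": [],
        "Shared memory": [],
    },
    "MIMD-based paradigms": {
        "MIMD/SIMD hybrid": [],
        "Dataflow machine": [],
        "Reduction machine": [],
        "Wavefront array": [],
    },
}

def walk(tree, path=()):
    """Print every category and subclass as a slash-separated path."""
    for key, value in tree.items():
        here = path + (key,)
        print(" / ".join(here))
        if isinstance(value, dict):
            walk(value, here)
        else:
            for leaf in value:
                print(" / ".join(here + (leaf,)))

walk(DUNCAN_TAXONOMY)
```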
Duncan's taxonomy
Technology,Engineering
821
51,917,610
https://en.wikipedia.org/wiki/Allyltestosterone
Allyltestosterone, or 17α-allyltestosterone, also known as 17α-allylandrost-4-en-17β-ol-3-one, is a steroid derived from testosterone that was first synthesized in 1936 and was never marketed. Along with propyltestosterone (topterone), it has been patented as a topical antiandrogen and hair growth inhibitor. Allyltestosterone is the parent structure of two marketed 19-nortestosterone progestins, allylestrenol and altrenogest. These progestins are unique among testosterone derivatives in that they appear to be associated with few or no androgenic effects. See also Steroidal antiandrogen List of steroidal antiandrogens Allylnortestosterone Ethinyltestosterone Vinyltestosterone References Abandoned drugs Tertiary alcohols Allyl compounds Androstanes Enones Steroidal antiandrogens
Allyltestosterone
Chemistry
210
5,399,607
https://en.wikipedia.org/wiki/Courtesy%20call
A courtesy call is a call or visit made out of politeness. It is usually made between two parties of high position, such as government officials, to meet and briefly discuss important or pressing matters. Diplomacy In diplomacy, a courtesy call is a formal meeting in which a diplomat or representative or a famous person of a nation pays a visit out of courtesy to a head of state or state office holder. Courtesy calls may be paid by another head of state, a prime minister, a government minister, or a diplomat. The meeting is usually of symbolic value and rarely involves a detailed discussion of issues. A newly appointed head of mission will usually make a courtesy call to the receiving foreign minister, head of government, and often other dignitaries such as the local mayor. It is also customary for a new head of mission to make courtesy calls to other heads of missions in the capital and often to receive return courtesy calls. Neglecting to pay a courtesy call to missions of smaller countries may result in them resenting the newly arrived mission head. Upon the departure of a head of mission, an additional round of courtesy calls is often expected. Fulfilling this protocol obligation is a time-consuming task, with one diplomat noting it took him five months to complete a round in Washington DC. Diplomatic convention states that courtesy calls last 20 minutes, which in some cases is excessive, with both sides searching frantically for what to say, though some ambassadors consult an encyclopedia prior to the call to prepare talking points. In other cases, in which the meeting sides have joint items to discuss, a call may last an hour or two. Diplomatic personnel are split on the value of courtesy calls, some seeing them as a time-wasting tradition while others see them as a means to secure a valuable introduction. In some cases, it is possible to arrange a joint courtesy call by visiting a senior ambassador who will, by prearrangement, assemble his regional colleagues for the meeting. Courtesy calls to cabinet members and members of parliament or congress are important and may lay the foundation for a continuing relationship. In Western democracies, ambassadors will pay calls to leaders of minor and major opposition parties as a change of government may occur at a future point. Such calls are important, and the ambassador must take care to cultivate the opposition without offending the incumbents. Calls to civic dignitaries of major cities, newspaper editors, and trade unions are also performed. Naval Naval courtesy calls were common in the 19th century. The American Great White Fleet paid a series of courtesy calls to ports around the world in a show of American naval strength in 1907–1909. United States Navy regulations require that (upon joining a new ship or station) an officer must make a courtesy call to his new commanding officer or commandant within 48 hours after joining. Business In business, a courtesy call is a visit or call from a company to customers for the purposes of gauging satisfaction or to thank them for their patronage. References State ritual and ceremonies Etiquette
Courtesy call
Biology
595
49,134,790
https://en.wikipedia.org/wiki/Frenkel%E2%80%93Kontorova%20model
The Frenkel–Kontorova (FK) model is a fundamental model of low-dimensional nonlinear physics. The generalized FK model describes a chain of classical particles with nearest neighbor interactions and subjected to a periodic on-site substrate potential. In its original and simplest form the interactions are taken to be harmonic and the potential to be sinusoidal with a periodicity commensurate with the equilibrium distance of the particles. Different choices for the interaction and substrate potentials and inclusion of a driving force may describe a wide range of different physical situations. Originally introduced by Yakov Frenkel and in 1938 to describe the structure and dynamics of a crystal lattice near a dislocation core, the FK model has become one of the standard models in condensed matter physics due to its applicability to describe many physical phenomena. Physical phenomena that can be modeled by FK model include dislocations, the dynamics of adsorbate layers on surfaces, crowdions, domain walls in magnetically ordered structures, long Josephson junctions, hydrogen-bonded chains, and DNA type chains. A modification of the FK model, the Tomlinson model, plays an important role in the field of tribology. The equations for stationary configurations of the FK model reduce to those of the standard map or Chirikov–Taylor map of stochastic theory. In the continuum-limit approximation the FK model reduces to the exactly integrable sine-Gordon (SG) equation, which allows for soliton solutions. For this reason the FK model is also known as the "discrete sine-Gordon" or "periodic Klein–Gordon equation". History A simple model of a harmonic chain in a periodic substrate potential was proposed by Ulrich Dehlinger in 1928. Dehlinger derived an approximate analytical expression for the stable solutions of this model, which he termed , which correspond to what is today called kink pairs. An essentially similar model was developed by Ludwig Prandtl in 1912/13 but did not see publication until 1928. The model was independently proposed by Yakov Frenkel and Tatiana Kontorova in their 1938 article On the theory of plastic deformation and twinning to describe the dynamics of a crystal lattice near a dislocation and to describe crystal twinning. In the standard linear harmonic chain any displacement of the atoms will result in waves, and the only stable configuration will be the trivial one. For the nonlinear chain of Frenkel and Kontorova, there exist stable configurations beside the trivial one. For small atomic displacements the situation resembles the linear chain; however, for large enough displacements, it is possible to create a moving single dislocation, for which an analytical solution was derived by Frenkel and Kontorova. The shape of these dislocations is defined only by the parameters of the system such as the mass and the elastic constant of the springs. Dislocations, also called solitons, are distributed non-local defects and mathematically are a type of topological defect. The defining characteristic of solitons/dislocations is that they behave much like stable particles, they can move while maintaining their overall shape. Two solitons of equal and opposite orientation may cancel upon collision, but a single soliton can not annihilate spontaneously. 
Generalized model The generalized FK model treats a one-dimensional chain of atoms with nearest-neighbor interaction in periodic on-site potential, the Hamiltonian for this system is where the first term is the kinetic energy of the atoms of mass , and the potential energy is a sum of the potential energy due to the nearest-neighbor interaction and that of the substrate potential: . The substrate potential is periodic, i.e. for some . For non-harmonic interactions and/or non-sinusoidal potential, the FK model will give rise to a commensurate–incommensurate phase transition. The FK model can be applied to any system that can be treated as two coupled sub-systems where one subsystem can be approximated as a linear chain and the second subsystem as a motionless substrate potential. An example would be the adsorption of a layer onto a crystal surface, here the adsorption layer can be approximated as the chain, and the crystal surface as an on-site potential. Classical model In this section we examine in detail the simplest form of the FK model. A detailed version of this derivation can be found in the literature. The model describes a one-dimensional chain of atoms with a harmonic nearest neighbor interaction and subject to a sinusoidal potential. Transverse motion of the atoms is ignored, i.e. the atoms can only move along the chain. The Hamiltonian for this situation is given by , where we specify the interaction potential to be where is the elastic constant, and is the inter-atomic equilibrium distance. The substrate potential is with being the amplitude, and the period. The following dimensionless variables are introduced in order to rewrite the Hamiltonian: In dimensionless form the Hamiltonian is which describes a harmonic chain of atoms of unit mass in a sinusoidal potential of period with amplitude . The equation of motion for this Hamiltonian is We consider only the case where and are commensurate, for simplicity we take . Thus in the ground state of the chain each minimum of the substrate potential is occupied by one atom. We introduce the variable for atomic displacements which is defined by For small displacements the equation of motion may be linearized and takes the following form: This equation of motion describes phonons with with the phonon dispersion relation with the dimensionless wavenumber . This shows that the frequency spectrum of the chain has a band gap with cut-off frequency . The linearised equation of motion are not valid when the atomic displacements are not small, and one must use the nonlinear equation of motion. The nonlinear equations can support new types of localized excitations, which are best illuminated by considering the continuum limit of the FK model. Applying the standard procedure of Rosenau to derive continuum-limit equations from a discrete lattice results in the perturbed sine-Gordon equation where the function describes in first order the effects due to the discreteness of the chain. Neglecting the discreteness effects and introducing reduces the equation of motion to the sine-Gordon (SG) equation in its standard form The SG equation gives rise to three elementary excitations/solutions: kinks, breathers and phonons. Kinks, or topological solitons, can be understood as the solution connecting two nearest identical minima of the periodic substrate potential, thus they are a result of the degeneracy of the ground state. These solutions are where is the topological charge. For the solution is called a kink, and for it is an antikink. 
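The displayed formulas in this record did not survive text extraction. As a hedged reconstruction, the dimensionless FK model is usually written as below, following the common convention of a sinusoidal substrate of period 2π and a dimensionless coupling constant g; the source's exact symbols and scalings may differ:

```latex
% Reconstruction of the standard dimensionless FK equations (conventions may differ from the source).
\begin{align}
  H &= \sum_n \Bigl[\tfrac{1}{2}\dot{u}_n^{2}
        + \tfrac{g}{2}\,(u_{n+1}-u_n)^{2} + \bigl(1-\cos u_n\bigr)\Bigr]
      && \text{(harmonic chain in a sinusoidal potential)}\\
  \ddot{u}_n &= g\,(u_{n+1}-2u_n+u_{n-1}) - \sin u_n
      && \text{(equation of motion)}\\
  \omega^{2}(\kappa) &= 1 + 4g\,\sin^{2}(\kappa/2)
      && \text{(phonon band: gap at } \omega_{\min}=1,\ \text{cut-off } \omega_{\max}=\sqrt{1+4g}\,)\\
  u_{tt}-u_{xx}+\sin u &= 0
      && \text{(continuum limit: sine-Gordon equation, after rescaling } x)\\
  u_{\sigma}(x,t) &= 4\arctan\exp\!\Bigl[\sigma\,\tfrac{x-vt}{\sqrt{1-v^{2}}}\Bigr],\qquad \sigma=\pm1
      && \text{(kink for } \sigma=+1,\ \text{antikink for } \sigma=-1)
\end{align}
```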
The kink width is determined by the kink velocity , where is measured in units of the sound velocity and is . For kink motion with , the width approximates 1. The energy of the kink in dimensionless units is from which the rest mass of the kink follows as , and the kinks rest energy as . Two neighboring static kinks with distance have energy of repulsion whereas kink and antikink attract with interaction A breather is which describes nonlinear oscillation with frequency , with . The breather rest energy For low frequencies the breather can be seen as a coupled kink–antikink pair. Kinks and breathers can move along the chain without any dissipative energy loss. Furthermore, any collision between all the excitations of the SG equation result in only a phase shift. Thus kinks and breathers may be considered nonlinear quasi-particles of the SG model. For nearly integrable modifications of the SG equation such as the continuum approximation of the FK model kinks can be considered deformable quasi-particles, provided that discreetness effects are small. The Peierls–Nabarro potential In the preceding section the excitations of the FK model were derived by considering the model in a continuum-limit approximation. Since the properties of kinks are only modified slightly by the discreteness of the primary model, the SG equation can adequately describe most features and dynamics of the system. The discrete lattice does, however, influence the kink motion in a unique way with the existence of the Peierls–Nabarro (PN) potential , where is the position of the kink's center. The existence of the PN potential is due to the lack of translational invariance in a discrete chain. In the continuum limit the system is invariant for any translation of the kink along the chain. For a discrete chain, only those translations that are an integer multiple of the lattice spacing leave the system invariant. The PN barrier, , is the smallest energy barrier for a kink to overcome so that it can move through the lattice. The value of the PN barrier is the difference between the kink's potential energy for a stable and unstable stationary configuration. The stationary configurations are shown schematically in the figure. References Classical mechanics Lattice models Solitons
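For the same reason, the kink and breather properties quoted in the preceding passage are reproduced below in their usual sine-Gordon form. This is again a reconstruction: the numerical prefactors assume the standard SG units used above (sound velocity equal to 1) and may need to be rescaled to match the source's conventions:

```latex
% Standard sine-Gordon kink energetics and breather solution (reconstruction).
\begin{align}
  d(v) &= \sqrt{1-v^{2}}
     && \text{(kink width; } d \to 1 \text{ as } v \to 0)\\
  E_{\mathrm{kink}}(v) &= \frac{8}{\sqrt{1-v^{2}}},\qquad
  E_{\mathrm{rest}} = m_{\mathrm{kink}} = 8
     && \text{(kink energy, rest energy and rest mass)}\\
  u_{\mathrm{br}}(x,t) &= 4\arctan\!\Bigl[\frac{\sqrt{1-\Omega^{2}}}{\Omega}\,
     \frac{\sin(\Omega t)}{\cosh\!\bigl(\sqrt{1-\Omega^{2}}\,x\bigr)}\Bigr],\qquad
  E_{\mathrm{br}} = 16\sqrt{1-\Omega^{2}}
     && \text{(breather, } 0<\Omega<1)
\end{align}
```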
Frenkel–Kontorova model
Physics,Materials_science
1,867
45,692,661
https://en.wikipedia.org/wiki/Low-power%20wide-area%20network
A low-power, wide-area network (LPWAN or LPWA network) is a type of wireless telecommunication wide area network designed to allow long-range communication at a low bit rate between IoT devices, such as sensors operated on a battery. Low power, low bit rate, and intended use distinguish this type of network from a wireless WAN that is designed to connect users or businesses, and carry more data, using more power. The LPWAN data rate ranges from 0.3 kbit/s to 50 kbit/s per channel. A LPWAN may be used to create a private wireless sensor network, but may also be a service or infrastructure offered by a third party, allowing the owners of sensors to deploy them in the field without investing in gateway technology. Attributes Range: The operating range of LPWAN technology varies from a few kilometers in urban areas to over 10 km in rural settings. It can also enable effective data communication in previously infeasible indoor and underground locations. Power: LPWAN manufacturers claim years to decades of usable life from built-in batteries, but real-world application tests have not confirmed this. Platforms and technologies Some competing standards and vendors for LPWAN space include: DASH7, a low latency, bi-directional firmware standard that operates over multiple LPWAN radio technologies including LoRa. Wize is an open and royalty-free standard for LPWAN derived from the European Standard Wireless Mbus. Chirp spread spectrum (CSS) based devices. Sigfox, UNB-based technology and French company. LoRa is a proprietary, chirp spread spectrum radio modulation technology for LPWAN used by LoRaWAN, Haystack Technologies, and Symphony Link. MIoTy, implementing Telegram Splitting technology. Weightless is an open standard, narrowband technology for LPWAN used by Ubiik ELTRES, a LPWA technology developed by Sony, with transmission ranges of over 100 km while moving at speeds of 100 km/h. IEEE 802.11ah, also known as Wi-Fi HaLow, is a low-power, wide-area implementation of 802.11 wireless networking standard using sub-gig frequencies. Ultra-narrow band Ultra Narrowband (UNB), modulation technology used for LPWAN by various companies including: Sigfox, French UNB-based technology company. Weightless, a set of communication standards from the Weightless SIG. NB-Fi Protocol, developed by WAVIoT company. Others DASH7 Mode 2 development framework for low power wireless networks, by Haystack Technologies. Runs over many wireless radio standards like LoRa, LTE, 802.15.4g, and others. LTE Advanced for Machine Type Communications (LTE-M), an evolution of LTE communications for connected things by 3GPP. MySensors, DIY Home Automation framework supporting different radios including LoRa. NarrowBand IoT (NB-IoT), standardization effort by 3GPP for a LPWAN used in cellular networks. Random phase multiple access (RPMA) from Ingenu, formerly known as On-Ramp Wireless, is based on a variation of CDMA technology for cellular phones, but uses unlicensed 2.4 GHz spectrum. RPMA is used in GE's AMI metering. Byron, a direct-sequence spread spectrum (DSSS) technology from Taggle Systems in Australia. Wi-SUN, based on IEEE 802.15.4g. See also Internet of things Wide area networks Static Context Header Compression (SCHC) QRP operation Slowfeld Through-the-earth mine communications Short range device IEEE 802.15.4 (Low-power personal-area network) IEEE 802.16 (WiMAX) References Wide area networks Wireless networking
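As a rough, back-of-the-envelope illustration of what the quoted 0.3-50 kbit/s channel rates mean for a battery-powered sensor, the sketch below estimates raw time on air for a small payload; it deliberately ignores preambles, coding overhead, duty-cycle limits and retransmissions, which vary between the technologies listed above:

```python
# Rough airtime estimate for a small sensor payload at LPWAN-class data rates.
def airtime_seconds(payload_bytes, bitrate_bps):
    return payload_bytes * 8 / bitrate_bps

payload = 20  # bytes, e.g. one sensor reading (illustrative size)
for rate in (300, 5_000, 50_000):  # 0.3, 5 and 50 kbit/s
    print(f"{rate / 1000:>4} kbit/s -> {airtime_seconds(payload, rate):.3f} s on air")
```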
Low-power wide-area network
Technology,Engineering
769
22,349,689
https://en.wikipedia.org/wiki/Contextual%20empiricism
Contextual empiricism is a theory about validating scientific knowledge. It is the view that scientific knowledge is shaped by contextual values as well as constitutive ones. See also Scientific theory Helen Longino References Empiricism Metatheory of science Social epistemology
Contextual empiricism
Technology
60
172,331
https://en.wikipedia.org/wiki/Gene%20pool
The gene pool is the set of all genes, or genetic information, in any population, usually of a particular species. Description A large gene pool indicates extensive genetic diversity, which is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) can cause reduced biological fitness and an increased chance of extinction, although as explained by genetic drift new genetic variants, that may cause an increase in the fitness of organisms, are more likely to fix in the population if it is rather small. When all individuals in a population are identical with regard to a particular phenotypic trait, the population is said to be 'monomorphic'. When the individuals show several variants of a particular trait they are said to be polymorphic. History The Russian geneticist Alexander Sergeevich Serebrovsky first formulated the concept in the 1920s as genofond (gene fund), a word that was imported to the United States from the Soviet Union by Theodosius Dobzhansky, who translated it into English as "gene pool." Gene pool concept in crop breeding Harlan and de Wet (1971) proposed classifying each crop and its related species by gene pools rather than by formal taxonomy. Primary gene pool (GP-1): Members of this gene pool are probably in the same "species" (in conventional biological usage) and can intermate freely. Harlan and de Wet wrote, "Among forms of this gene pool, crossing is easy; hybrids are generally fertile with good chromosome pairing; gene segregation is approximately normal and gene transfer is generally easy.". They also advised subdividing each crop gene pool in two: Subspecies A: Cultivated races Subspecies B: Spontaneous races (wild or weedy) Secondary gene pool (GP-2): Members of this pool are probably normally classified as different species than the crop species under consideration (the primary gene pool). However, these species are closely related and can cross and produce at least some fertile hybrids. As would be expected by members of different species, there are some reproductive barriers between members of the primary and secondary gene pools: hybrids may be weak hybrids may be partially sterile chromosomes may pair poorly or not at all recovery of desired phenotypes may be difficult in subsequent generations However, "The gene pool is available to be utilized, however, if the plant breeder or geneticist is willing to put out the effort required." Tertiary gene pool (GP-3): Members of this gene pool are more distantly related to the members of the primary gene pool. The primary and tertiary gene pools can be intermated, but gene transfer between them is impossible without the use of "rather extreme or radical measures" such as: embryo rescue (or embryo culture, a form of plant organ culture) induced polyploidy (chromosome doubling) bridging crosses (e.g., with members of the secondary gene pool). Gene pool centres Gene pool centres refers to areas on the earth where important crop plants and domestic animals originated. They have an extraordinary range of the wild counterparts of cultivated plant species and useful tropical plants. Gene pool centres also contain different sub tropical and temperate region species. 
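One common way to quantify how "large" a gene pool is at a single locus is expected heterozygosity (gene diversity): a monomorphic locus scores 0, and the score rises as more variants are present at more even frequencies. This measure is standard population genetics rather than anything specific to the crop gene-pool scheme above; the allele counts below are invented for illustration:

```python
# Expected heterozygosity H = 1 - sum(p_i^2) over allele frequencies p_i at one locus.
def expected_heterozygosity(allele_counts):
    total = sum(allele_counts)
    return 1.0 - sum((c / total) ** 2 for c in allele_counts)

print(expected_heterozygosity([100]))             # 0.0  -> monomorphic locus
print(expected_heterozygosity([50, 50]))          # 0.5  -> two alleles at even frequencies
print(expected_heterozygosity([25, 25, 25, 25]))  # 0.75 -> more variants, greater diversity
```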
See also Biodiversity Conservation biology Founder effect Gene flow Genetic drift Small population size Australian Grains Genebank References Ecology Conservation biology Selection Genetics concepts Classical genetics Population genetics Evolutionary biology Biorepositories
Gene pool
Biology
687
14,503,823
https://en.wikipedia.org/wiki/Esophageal%20stent
An esophageal stent is a stent (tube) placed in the esophagus to keep a blocked area open so the patient can swallow soft food and liquids. They are effective in the treatment of conditions causing intrinsic esophageal obstruction or external esophageal compression. For the palliative treatment of esophageal cancer most esophageal stents are self-expandable metallic stents. For benign esophageal disease such as refractory esophageal strictures, plastic stents are available. Common complications include chest pain, overgrowth of tissue around the stent and stent migration. Esophageal stents may also be used to staunch the bleeding of esophageal varices. Esophageal stents are placed using endoscopy when after the tip of the endoscope is positioned above the area to be stented, then guidewire is passed through the obstruction into the stomach. The endoscope is withdrawn and using the guidewire with either fluoroscopic or endoscopic guidance the stent is passed down the guidewire to the affected area of the esophagus and deployed. Finally, the guidewire is removed and the stent is left to fully expand over the next 2–3 days. In one study of 997 patients who had self-expanding metal stents for malignant esophageal obstruction it was found that esophageal stents were 95% effective. Pros of Esophageal Stent There are several potential benefits of an esophageal stent procedure: Symptoms relief: stents can help by alleviating symptoms e.g. swallowing, chest pain, and weight loss caused by a narrowed or blocked esophagus. Fast Results: Normally performed in a day and quick recovery. Minor invasive: When using an endoscope, it makes the procedure less invasive than some other treatments. Palliative care: Stents help patients with advanced esophageal cancer by relieving symptoms and improving the quality of life. Alternative to surgery: For older and less healthy patients, an esophageal stent is a viable alternative to surgery, Cons of Esophageal Stent There are also several potential drawbacks to an esophageal stent procedure: Complications: Bleeding, infection, and perforation of the esophagus may occur. Stent migration: Stent may move causing symptoms to recur or lead to other complications. Stent obstruction: Blockage can occur, repeating symptoms or other complications. Stent related pain: Chest or throat pain may occur after the procedure; requiring additional treatment or adjustment of the stent. Stent removal: Check with your doctor on the stent type used for the procedure. Ask if it may need to be removed at a later date and the process and issues that may come about as a result. Additional images References External links Esophageal stent entry in the public domain NCI Dictionary of Cancer Terms Surgical oncology Implants (medicine) Medical devices
Esophageal stent
Biology
648
3,706,186
https://en.wikipedia.org/wiki/Gusset
In sewing, a gusset is a triangular or rhomboidal piece of fabric inserted into a seam to add breadth or reduce stress from tight-fitting clothing. Gussets were used at the shoulders, underarms, and hems of traditional shirts and chemises made of rectangular lengths of linen to shape the garments to the body. Gussets are used in manufacturing of modern tights and pantyhose to add breadth at the crotch seam. As with other synthetic underwear, these gussets are often made of moisture-wicking breathable fabrics such as cotton, to keep the genital area dry and ventilated. Gussets are also used when making three-piece bags, for example in a pattern for a bag as a long, wide piece which connects the front piece and back piece. By becoming the sides and bottom of the bag, the gusset opens the bag up beyond what simply attaching the front to the back would do. With reference to the dimension of the gusset, the measurements of a flat bottom bag may be quoted as L×W×G. Pillows too, are often gusseted, generally an inch or two. The side panels thicken the pillow, allowing more stuffing without bulging. The meaning of gusset has expanded beyond fabric, broadly to denote an added patch of joining material that provides structural support. For example, metal gussets are used in bicycle frames to add strength and rigidity. Gussets may be used in retort pouches and other forms of packaging to allow the package to stand. Gusset plates, usually triangular, are often used to join metal plates and can be seen in many metal framed constructions. Expanding folders or accordion folders also employ gussets to allow for expansion when containing more than just a few sheets of paper. The gusset is also a charge in heraldry, as is the gyron (an Old French word for gusset). See also Godet (sewing) Gore (fabrics) Gusset (heraldry) References Sewing Parts of clothing Triangles
Gusset
Technology
422
5,447,226
https://en.wikipedia.org/wiki/Current%20differencing%20buffered%20amplifier
A current differencing buffered amplifier (CDBA) is a multi-terminal active component with two inputs and two outputs, developed by Cevdet Acar and Serdar Özoğuz. Its block diagram can be seen from the figure. It is derived from the current feedback amplifier (CFA). Basic operation The characteristic equations of this element can be given as: Vp = Vn = 0, Iz = Ip - In, Vw = Vz. Here, the current through the z-terminal follows the difference between the currents through the p-terminal and the n-terminal. Input terminals p and n are internally grounded. The difference of the input currents is converted into the output voltage Vw, therefore the CDBA element can be considered a special type of current feedback amplifier with differential current input and grounded y input. The CDBA simplifies implementation, is free from parasitic capacitances, is able to operate in the frequency range of more than hundreds of MHz (even GHz), and is suitable for current-mode operation while also providing a voltage output. Several voltage- and current-mode continuous-time filters, oscillators, analog multipliers, inductance simulators and a PID controller have been developed using this active element. References Acar, C., and Ozoguz, S., “A new versatile building block: current differencing buffered amplifier suitable for analog signal processing filters”, Microelectronics Journal, vol. 30, pp. 157–160, 1999. Ali Ümit Keskin, "A Four Quadrant Analog Multiplier employing single CDBA", Analog Integrated Circuits and Signal Processing, vol. 40, no. 1, pp. 99–101, 2004. Tangsrirat, W., Klahan, K., Kaewdang, K., and Surakampontorn, W., “Low-Voltage Wide-Band NMOS-Based Current Differencing Buffered Amplifier”, ECTI Transactions on Electrical Eng., Electronics, and Communications, vol. 2, no. 1, pp. 15–22, 2004. Electronic amplifiers
Current differencing buffered amplifier
Technology
425
39,932,177
https://en.wikipedia.org/wiki/Shannon%20capacity%20of%20a%20graph
In graph theory, the Shannon capacity of a graph is a graph invariant defined from the number of independent sets of strong graph products. It is named after American mathematician Claude Shannon. It measures the Shannon capacity of a communications channel defined from the graph, and is upper bounded by the Lovász number, which can be computed in polynomial time. However, the computational complexity of the Shannon capacity itself remains unknown. Graph models of communication channels The Shannon capacity models the amount of information that can be transmitted across a noisy communication channel in which certain signal values can be confused with each other. In this application, the confusion graph or confusability graph describes the pairs of values that can be confused. For instance, suppose that a communications channel has five discrete signal values, any one of which can be transmitted in a single time step. These values may be modeled mathematically as the five numbers 0, 1, 2, 3, or 4 in modular arithmetic modulo 5. However, suppose that when a value is sent across the channel, the value that is received is (mod 5) where represents the noise on the channel and may be any real number in the open interval from −1 to 1. Thus, if the recipient receives a value such as 3.6, it is impossible to determine whether it was originally transmitted as a 3 or as a 4; the two values 3 and 4 can be confused with each other. This situation can be modeled by a graph, a cycle of length 5, in which the vertices correspond to the five values that can be transmitted and the edges of the graph represent values that can be confused with each other. For this example, it is possible to choose two values that can be transmitted in each time step without ambiguity, for instance, the values 1 and 3. These values are far enough apart that they can't be confused with each other: when the recipient receives a value between 0 and 2, it can deduce that the value that was sent must have been 1, and when the recipient receives a value in between 2 and 4, it can deduce that the value that was sent must have been 3. In this way, in steps of communication, the sender can communicate up to different messages. Two is the maximum number of values that the recipient can distinguish from each other: every subset of three or more of the values 0, 1, 2, 3, 4 includes at least one pair that can be confused with each other. Even though the channel has five values that can be sent per time step, effectively only two of them can be used with this coding scheme. However, more complicated coding schemes allow a greater amount of information to be sent across the same channel, by using codewords of length greater than one. For instance, suppose that in two consecutive steps the sender transmits one of the five code words "11", "23", "35", "54", or "42". (Here, the quotation marks indicate that these words should be interpreted as strings of symbols, not as decimal numbers.) Each pair of these code words includes at least one position where its values differ by two or more modulo 5; for instance, "11" and "23" differ by two in their second position, while "23" and "42" differ by two in their first position. Therefore, a recipient of one of these code words will always be able to determine unambiguously which one was sent: no two of these code words can be confused with each other. 
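The claim that the five code words are pairwise distinguishable can be checked mechanically. The sketch below encodes "confusable" exactly as described (two signal values can be confused when they differ by at most 1 modulo 5) and confirms that every pair of the listed words differs by two or more, modulo 5, in at least one position:

```python
# Symbols are residues 0..4 (the digit 5 in the article's code words is written as 0).
def confusable_symbols(a, b):
    """Two signal values can be confused when they differ by at most 1 modulo 5."""
    return (a - b) % 5 in (0, 1, 4)

def confusable_words(u, v):
    """Two words can be confused only if they are confusable in every position."""
    return all(confusable_symbols(a, b) for a, b in zip(u, v))

# The code words "11", "23", "35", "54", "42", with 5 written as 0.
code = [(1, 1), (2, 3), (3, 0), (0, 4), (4, 2)]
assert all(not confusable_words(u, v) for i, u in enumerate(code) for v in code[i + 1:])
print("all", len(code), "code words are pairwise distinguishable")
```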
By using this method, in steps of communication, the sender can communicate up to messages, significantly more than the that could be transmitted with the simpler one-digit code. The effective number of values that can be transmitted per unit time step is . In graph-theoretic terms, this means that the Shannon capacity of the 5-cycle is at least . As showed, this bound is tight: it is not possible to find a more complicated system of code words that allows even more different messages to be sent in the same amount of time, so the Shannon capacity of the 5-cycle is Relation to independent sets If a graph represents a set of symbols and the pairs of symbols that can be confused with each other, then a subset of symbols avoids all confusable pairs if and only if is an independent set in the graph, a subset of vertices that does not include both endpoints of any edge. The maximum possible size of a subset of the symbols that can all be distinguished from each other is the independence number of the graph, the size of its maximum independent set. For instance, ': the 5-cycle has independent sets of two vertices, but not larger. For codewords of longer lengths, one can use independent sets in larger graphs to describe the sets of codewords that can be transmitted without confusion. For instance, for the same example of five symbols whose confusion graph is , there are 25 strings of length two that can be used in a length-2 coding scheme. These strings may be represented by the vertices of a graph with 25 vertices. In this graph, each vertex has eight neighbors, the eight strings that it can be confused with. A subset of length-two strings forms a code with no possible confusion if and only if it corresponds to an independent set of this graph. The set of code words {"11", "23", "35", "54", "42"} forms one of these independent sets, of maximum size. If is a graph representing the signals and confusable pairs of a channel, then the graph representing the length-two codewords and their confusable pairs is , where the symbol represents the strong product of graphs. This is a graph that has a vertex for each pair of a vertex in the first argument of the product and a vertex in the second argument of the product. Two distinct pairs and are adjacent in the strong product if and only if and are identical or adjacent, and and are identical or adjacent. More generally, the codewords of length  can be represented by the graph , the -fold strong product of with itself, and the maximum number of codewords of this length that can be transmitted without confusion is given by the independence number . The effective number of signals transmitted per unit time step is the th root of this number, . Using these concepts, the Shannon capacity may be defined as the limit (as becomes arbitrarily large) of the effective number of signals per time step of arbitrarily long confusion-free codes. Computational complexity The computational complexity of the Shannon capacity is unknown, and even the value of the Shannon capacity for certain small graphs such as (a cycle graph of seven vertices) remains unknown. A natural approach to this problem would be to compute a finite number of powers of the given graph , find their independence numbers, and infer from these numbers some information about the limiting behavior of the sequence from which the Shannon capacity is defined. 
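A brute-force check of the independence numbers involved here is feasible at this size. The sketch below builds the 5-cycle and its strong square, verifies that the five code words above form an independent set, and confirms that no six vertices are pairwise non-adjacent, which gives the well-known values α(C5) = 2 and α(C5 ⊠ C5) = 5 behind the √5 lower bound on the Shannon capacity:

```python
from itertools import combinations, product

def c5_adjacent(a, b):
    """Adjacency in the 5-cycle on vertices 0..4."""
    return (a - b) % 5 in (1, 4)

def strong_adjacent(u, v):
    """Adjacency of distinct vertices in the strong product C5 x C5."""
    if u == v:
        return False
    close = lambda a, b: a == b or c5_adjacent(a, b)
    return close(u[0], v[0]) and close(u[1], v[1])

def is_independent(vertices):
    return all(not strong_adjacent(u, v) for u, v in combinations(vertices, 2))

# alpha(C5) = 2: some pair of vertices is non-adjacent, but no triple is.
assert any(not c5_adjacent(a, b) for a, b in combinations(range(5), 2))
assert not any(all(not c5_adjacent(a, b) for a, b in combinations(t, 2))
               for t in combinations(range(5), 3))

# alpha(C5 x C5) = 5: the five code words are independent, and no 6-subset is.
words = list(product(range(5), repeat=2))
code = [(1, 1), (2, 3), (3, 0), (0, 4), (4, 2)]
assert is_independent(code)
assert not any(is_independent(s) for s in combinations(words, 6))  # ~177,000 subsets, a few seconds
print("alpha(C5) = 2, alpha(C5 x C5) = 5")
```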
However (even ignoring the computational difficulty of computing the independence numbers of these graphs, an NP-hard problem) the unpredictable behavior of the sequence of independence numbers of powers of implies that this approach cannot be used to accurately approximate the Shannon capacity. Upper bounds In part because the Shannon capacity is difficult to compute, researchers have looked for other graph invariants that are easy to compute and that provide bounds on the Shannon capacity. Lovász number The Lovász number (G) is a different graph invariant, that can be computed numerically to high accuracy in polynomial time by an algorithm based on the ellipsoid method. The Shannon capacity of a graph G is bounded from below by α(G), and from above by (G). In some cases, (G) and the Shannon capacity coincide; for instance, for the graph of a pentagon, both are equal to . However, there exist other graphs for which the Shannon capacity and the Lovász number differ. Haemers' bound Haemers provided another upper bound on the Shannon capacity, which is sometimes better than Lovász bound: where B is an n × n matrix over some field, such that bii ≠ 0 and bij = 0 if vertices i and j are not adjacent. References Graph invariants Information theory
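In practice the Lovász number is usually computed by semidefinite programming rather than the ellipsoid method mentioned above. A minimal sketch of one standard SDP formulation, using the cvxpy modelling library (the library choice and solver are assumptions; any SDP solver would do); for the 5-cycle it should return a value close to √5 ≈ 2.236:

```python
# Lovász theta via a standard SDP formulation:
#   maximize sum of entries of X  s.t.  trace(X) = 1, X_ij = 0 for every edge ij, X PSD.
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # the 5-cycle C5

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for i, j in edges]
problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
problem.solve()

print(problem.value)  # ~2.236 for C5, i.e. sqrt(5)
```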
Shannon capacity of a graph
Mathematics,Technology,Engineering
1,676
41,288
https://en.wikipedia.org/wiki/Inverse-square%20law
In science, an inverse-square law is any scientific law stating that the observed "intensity" of a specified physical quantity is inversely proportional to the square of the distance from the source of that physical quantity. The fundamental cause for this can be understood as geometric dilution corresponding to point-source radiation into three-dimensional space. Radar energy expands during both the signal transmission and the reflected return, so the inverse square for both paths means that the radar will receive energy according to the inverse fourth power of the range. To prevent dilution of energy while propagating a signal, certain methods can be used such as a waveguide, which acts like a canal does for water, or how a gun barrel restricts hot gas expansion to one dimension in order to prevent loss of energy transfer to a bullet. Formula In mathematical notation the inverse square law can be expressed as an intensity (I) varying as a function of distance (d) from some centre. The intensity is proportional (see ∝) to the reciprocal of the square of the distance thus: It can also be mathematically expressed as : or as the formulation of a constant quantity: The divergence of a vector field which is the resultant of radial inverse-square law fields with respect to one or more sources is proportional to the strength of the local sources, and hence zero outside sources. Newton's law of universal gravitation follows an inverse-square law, as do the effects of electric, light, sound, and radiation phenomena. Justification The inverse-square law generally applies when some force, energy, or other conserved quantity is evenly radiated outward from a point source in three-dimensional space. Since the surface area of a sphere (which is 4πr2) is proportional to the square of the radius, as the emitted radiation gets farther from the source, it is spread out over an area that is increasing in proportion to the square of the distance from the source. Hence, the intensity of radiation passing through any unit area (directly facing the point source) is inversely proportional to the square of the distance from the point source. Gauss's law for gravity is similarly applicable, and can be used with any physical quantity that acts in accordance with the inverse-square relationship. Occurrences Gravitation Gravitation is the attraction between objects that have mass. Newton's law states: If the distribution of matter in each body is spherically symmetric, then the objects can be treated as point masses without approximation, as shown in the shell theorem. Otherwise, if we want to calculate the attraction between massive bodies, we need to add all the point-point attraction forces vectorially and the net attraction might not be exact inverse square. However, if the separation between the massive bodies is much larger compared to their sizes, then to a good approximation, it is reasonable to treat the masses as a point mass located at the object's center of mass while calculating the gravitational force. As the law of gravitation, this law was suggested in 1645 by Ismaël Bullialdus. But Bullialdus did not accept Kepler's second and third laws, nor did he appreciate Christiaan Huygens's solution for circular motion (motion in a straight line pulled aside by the central force). Indeed, Bullialdus maintained the sun's force was attractive at aphelion and repulsive at perihelion. Robert Hooke and Giovanni Alfonso Borelli both expounded gravitation in 1666 as an attractive force. 
Hooke's lecture "On gravity" was at the Royal Society, in London, on 21 March. Borelli's "Theory of the Planets" was published later in 1666. Hooke's 1670 Gresham lecture explained that gravitation applied to "all celestiall bodys" and added the principles that the gravitating power decreases with distance and that in the absence of any such power bodies move in straight lines. By 1679, Hooke thought gravitation had inverse square dependence and communicated this in a letter to Isaac Newton: my supposition is that the attraction always is in duplicate proportion to the distance from the center reciprocall. Hooke remained bitter about Newton claiming the invention of this principle, even though Newton's 1686 Principia acknowledged that Hooke, along with Wren and Halley, had separately appreciated the inverse square law in the solar system, as well as giving some credit to Bullialdus. Electrostatics The force of attraction or repulsion between two electrically charged particles, in addition to being directly proportional to the product of the electric charges, is inversely proportional to the square of the distance between them; this is known as Coulomb's law. The deviation of the exponent from 2 is less than one part in 1015. Light and other electromagnetic radiation The intensity (or illuminance or irradiance) of light or other linear waves radiating from a point source (energy per unit of area perpendicular to the source) is inversely proportional to the square of the distance from the source, so an object (of the same size) twice as far away receives only one-quarter the energy (in the same time period). More generally, the irradiance, i.e., the intensity (or power per unit area in the direction of propagation), of a spherical wavefront varies inversely with the square of the distance from the source (assuming there are no losses caused by absorption or scattering). For example, the intensity of radiation from the Sun is 9126 watts per square meter at the distance of Mercury (0.387 AU); but only 1367 watts per square meter at the distance of Earth (1 AU)—an approximate threefold increase in distance results in an approximate ninefold decrease in intensity of radiation. For non-isotropic radiators such as parabolic antennas, headlights, and lasers, the effective origin is located far behind the beam aperture. If you are close to the origin, you don't have to go far to double the radius, so the signal drops quickly. When you are far from the origin and still have a strong signal, like with a laser, you have to travel very far to double the radius and reduce the signal. This means you have a stronger signal or have antenna gain in the direction of the narrow beam relative to a wide beam in all directions of an isotropic antenna. In photography and stage lighting, the inverse-square law is used to determine the “fall off” or the difference in illumination on a subject as it moves closer to or further from the light source. For quick approximations, it is enough to remember that doubling the distance reduces illumination to one quarter; or similarly, to halve the illumination increase the distance by a factor of 1.4 (the square root of 2), and to double illumination, reduce the distance to 0.7 (square root of 1/2). When the illuminant is not a point source, the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%. 
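The Mercury/Earth irradiance figures quoted in the preceding section can be reproduced directly from the inverse-square scaling; a quick numerical check, taking the 1 AU value as the reference:

```python
# Inverse-square scaling of solar irradiance (values taken from the passage above).
earth_irradiance = 1367.0   # W/m^2 at 1 AU
mercury_distance = 0.387    # AU

mercury_irradiance = earth_irradiance / mercury_distance ** 2
print(round(mercury_irradiance))  # ~9127 W/m^2, matching the ~9126 W/m^2 quoted above
```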
The fractional reduction in electromagnetic fluence (Φ) for indirectly ionizing radiation with increasing distance from a point source can be calculated using the inverse-square law. Since emissions from a point source have radial directions, they intercept at a perpendicular incidence. The area of such a shell is 4πr 2 where r is the radial distance from the center. The law is particularly important in diagnostic radiography and radiotherapy treatment planning, though this proportionality does not hold in practical situations unless source dimensions are much smaller than the distance. As stated in Fourier theory of heat “as the point source is magnification by distances, its radiation is dilute proportional to the sin of the angle, of the increasing circumference arc from the point of origin”. Example Let P  be the total power radiated from a point source (for example, an omnidirectional isotropic radiator). At large distances from the source (compared to the size of the source), this power is distributed over larger and larger spherical surfaces as the distance from the source increases. Since the surface area of a sphere of radius r is A = 4πr 2, the intensity I (power per unit area) of radiation at distance r is The energy or intensity decreases (divided by 4) as the distance r is doubled; if measured in dB would decrease by 6.02 dB per doubling of distance. When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Sound in a gas In acoustics, the sound pressure of a spherical wavefront radiating from a point source decreases by 50% as the distance r is doubled; measured in dB, the decrease is still 6.02 dB, since dB represents an intensity ratio. The pressure ratio (as opposed to power ratio) is not inverse-square, but is inverse-proportional (inverse distance law): The same is true for the component of particle velocity that is in-phase with the instantaneous sound pressure : In the near field is a quadrature component of the particle velocity that is 90° out of phase with the sound pressure and does not contribute to the time-averaged energy or the intensity of the sound. The sound intensity is the product of the RMS sound pressure and the in-phase component of the RMS particle velocity, both of which are inverse-proportional. Accordingly, the intensity follows an inverse-square behaviour: Field theory interpretation For an irrotational vector field in three-dimensional space, the inverse-square law corresponds to the property that the divergence is zero outside the source. This can be generalized to higher dimensions. Generally, for an irrotational vector field in n-dimensional Euclidean space, the intensity "I" of the vector field falls off with the distance "r" following the inverse (n − 1)th power law given that the space outside the source is divergence free. Non-Euclidean implications The inverse-square law, fundamental in Euclidean spaces, also applies to non-Euclidean geometries, including hyperbolic space. The curvature present in these spaces alters physical laws, influencing a variety of fields such as cosmology, general relativity, and string theory. John D. Barrow, in his 2020 paper "Non-Euclidean Newtonian Cosmology," expands on the behavior of force (F) and potential (Φ) within hyperbolic 3-space (H3). 
He explains that F and Φ obey the relationships F ∝ 1 / R² sinh²(r/R) and Φ ∝ coth(r/R), where R represents the curvature radius and r represents the distance from the focal point. The concept of spatial dimensionality, first proposed by Immanuel Kant, remains a topic of debate concerning the inverse-square law. Dimitria Electra Gatzia and Rex D. Ramsier, in their 2021 paper, contend that the inverse-square law is more closely related to force distribution symmetry than to the dimensionality of space. In the context of non-Euclidean geometries and general relativity, deviations from the inverse-square law do not arise from the law itself but rather from the assumption that the force between two bodies is instantaneous, which contradicts special relativity. General relativity reinterprets gravity as the curvature of spacetime, leading particles to move along geodesics in this curved spacetime. History John Dumbleton of the 14th-century Oxford Calculators, was one of the first to express functional relationships in graphical form. He gave a proof of the mean speed theorem stating that "the latitude of a uniformly difform movement corresponds to the degree of the midpoint" and used this method to study the quantitative decrease in intensity of illumination in his Summa logicæ et philosophiæ naturalis (ca. 1349), stating that it was not linearly proportional to the distance, but was unable to expose the Inverse-square law. In proposition 9 of Book 1 in his book Ad Vitellionem paralipomena, quibus astronomiae pars optica traditur (1604), the astronomer Johannes Kepler argued that the spreading of light from a point source obeys an inverse square law: In 1645, in his book Astronomia Philolaica ..., the French astronomer Ismaël Bullialdus (1605–1694) refuted Johannes Kepler's suggestion that "gravity" weakens as the inverse of the distance; instead, Bullialdus argued, "gravity" weakens as the inverse square of the distance: In England, the Anglican bishop Seth Ward (1617–1689) publicized the ideas of Bullialdus in his critique In Ismaelis Bullialdi astronomiae philolaicae fundamenta inquisitio brevis (1653) and publicized the planetary astronomy of Kepler in his book Astronomia geometrica (1656). In 1663–1664, the English scientist Robert Hooke was writing his book Micrographia (1666) in which he discussed, among other things, the relation between the height of the atmosphere and the barometric pressure at the surface. Since the atmosphere surrounds the Earth, which itself is a sphere, the volume of atmosphere bearing on any unit area of the Earth's surface is a truncated cone (which extends from the Earth's center to the vacuum of space; obviously only the section of the cone from the Earth's surface to space bears on the Earth's surface). Although the volume of a cone is proportional to the cube of its height, Hooke argued that the air's pressure at the Earth's surface is instead proportional to the height of the atmosphere because gravity diminishes with altitude. Although Hooke did not explicitly state so, the relation that he proposed would be true only if gravity decreases as the inverse square of the distance from the Earth's center. 
See also Flux Antenna (radio) Gauss's law Kepler's laws of planetary motion Kepler problem Telecommunications, particularly: William Thomson, 1st Baron Kelvin Power-aware routing protocols Inverse proportionality Multiplicative inverse Distance decay Fermi paradox Square–cube law Principle of similitude References External links Damping of sound level with distance Sound pressure p and the inverse distance law 1/r Philosophy of physics Scientific method
Inverse-square law
Physics,Mathematics
2,976
15,387
https://en.wikipedia.org/wiki/Irreducible%20complexity
Irreducible complexity (IC) is the argument that certain biological systems with multiple interacting parts would not function if one of the parts were removed, so supposedly could not have evolved by successive small modifications from earlier less complex systems through natural selection, which would need all intermediate precursor systems to have been fully functional. This negative argument is then complemented by the claim that the only alternative explanation is a "purposeful arrangement of parts" inferring design by an intelligent agent. Irreducible complexity has become central to the creationist concept of intelligent design (ID), but the concept of irreducible complexity has been rejected by the scientific community, which regards intelligent design as pseudoscience. Irreducible complexity and specified complexity are the two main arguments used by intelligent-design proponents to support their version of the theological argument from design. Michael Behe introduced the expression irreducible complexity along with a full account of his arguments in his 1996 book Darwin's Black Box, and he said it made evolution through natural selection of random mutations impossible, or extremely improbable. This was based on the mistaken assumption that evolution relies on improvement of existing functions, ignoring how complex adaptations originate from changes in function, and disregarding published research. Evolutionary biologists have published rebuttals showing how systems discussed by Behe can evolve. In the 2005 Kitzmiller v. Dover Area School District trial, Behe gave testimony on the subject of irreducible complexity. The court found that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large." Definitions Michael Behe defined irreducible complexity in natural selection in terms of well-matched parts in his 1996 book Darwin's Black Box: ... a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. A second definition given by Behe in 2000 (his "evolutionary definition") states: An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway. Intelligent-design advocate William A. Dembski assumed an "original function" in his 2002 definition: A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. History Forerunners The argument from irreducible complexity is a descendant of the teleological argument for God (the argument from design or from complexity). This states that complex functionality in the natural world which looks designed is evidence of an intelligent creator. William Paley famously argued, in his 1802 watchmaker analogy, that complexity in nature implies a God for the same reason that the existence of a watch implies the existence of a watchmaker.
This argument has a long history, and one can trace it back at least as far as Cicero's De Natura Deorum ii.34, written in 45 BC. Up to the 18th century Galen (1st and 2nd centuries AD) wrote about the large number of parts of the body and their relationships, which observation was cited as evidence for creation. The idea that the interdependence between parts would have implications for the origins of living things was raised by writers starting with Pierre Gassendi in the mid-17th century and by John Wilkins (1614–1672), who wrote (citing Galen), "Now to imagine, that all these things, according to their several kinds, could be brought into this regular frame and order, to which such an infinite number of Intentions are required, without the contrivance of some wise Agent, must needs be irrational in the highest degree." In the late 17th-century, Thomas Burnet referred to "a multitude of pieces aptly joyn'd" to argue against the eternity of life. In the early 18th century, Nicolas Malebranche wrote "An organized body contains an infinity of parts that mutually depend upon one another in relation to particular ends, all of which must be actually formed in order to work as a whole", arguing in favor of preformation, rather than epigenesis, of the individual; and a similar argument about the origins of the individual was made by other 18th-century students of natural history. In his 1790 book, The Critique of Judgment, Kant is said by Guyer to argue that "we cannot conceive how a whole that comes into being only gradually from its parts can nevertheless be the cause of the properties of those parts". 19th century Chapter XV of Paley's Natural Theology discusses at length what he called "relations" of parts of living things as an indication of their design. Georges Cuvier applied his principle of the correlation of parts to describe an animal from fragmentary remains. For Cuvier, this related to another principle of his, the conditions of existence, which excluded the possibility of transmutation of species. While he did not originate the term, Charles Darwin identified the argument as a possible way to falsify a prediction of the theory of evolution at the outset. In The Origin of Species (1859), he wrote, "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case." Darwin's theory of evolution challenges the teleological argument by postulating an alternative explanation to that of an intelligent designer—namely, evolution by natural selection. By showing how simple unintelligent forces can ratchet up designs of extraordinary complexity without invoking outside design, Darwin showed that an intelligent designer was not the necessary conclusion to draw from complexity in nature. The argument from irreducible complexity attempts to demonstrate that certain biological features cannot be purely the product of Darwinian evolution. In the late 19th century, in a dispute between supporters of the adequacy of natural selection and those who held for inheritance of acquired characteristics, one of the arguments made repeatedly by Herbert Spencer, and followed by others, depended on what Spencer referred to as co-adaptation of co-operative parts, as in: "We come now to Professor Weismann's endeavour to disprove my second thesis—that it is impossible to explain by natural selection alone the co-adaptation of co-operative parts. 
It is thirty years since this was set forth in 'The Principles of Biology.' In § 166, I instanced the enormous horns of the extinct Irish elk, and contended that in this and in kindred cases, where for the efficient use of some one enlarged part many other parts have to be simultaneously enlarged, it is out of the question to suppose that they can have all spontaneously varied in the required proportions." Responses published at the time referred to what has become known as the Baldwin effect, and both sides of the issue have since been analysed. Darwin responded to Spencer's objections in chapter XXV of The Variation of Animals and Plants Under Domestication (1868). The history of this concept in the dispute has been characterized: "An older and more religious tradition of idealist thinkers were committed to the explanation of complex adaptive contrivances by intelligent design. ... Another line of thinkers, unified by the recurrent publications of Herbert Spencer, also saw co-adaptation as a composed, irreducible whole, but sought to explain it by the inheritance of acquired characteristics." St. George Jackson Mivart raised the objection to natural selection that "Complex and simultaneous co-ordinations ... until so far developed as to effect the requisite junctions, are useless". In the 2012 book Evolution and Belief, Confessions of a Religious Paleontologist, Robert J. Asher said this "amounts to the concept of 'irreducible complexity' as defined by ... Michael Behe". 20th century Hermann Muller, in the early 20th century, discussed a concept similar to irreducible complexity. However, far from seeing this as a problem for evolution, he described the "interlocking" of biological features as a consequence to be expected of evolution, which would lead to irreversibility of some evolutionary changes. He wrote, "Being thus finally woven, as it were, into the most intimate fabric of the organism, the once novel character can no longer be withdrawn with impunity, and may have become vitally necessary." In 1975 Thomas H. Frazzetta published a book-length study of a concept similar to irreducible complexity, explained by gradual, step-wise, non-teleological evolution. Frazzetta wrote: "A complex adaptation is one constructed of several components that must blend together operationally to make the adaptation 'work'. It is analogous to a machine whose performance depends upon careful cooperation among its parts. In the case of the machine, no single part can greatly be altered without changing the performance of the entire machine." The machine that he chose as an analog is the Peaucellier–Lipkin linkage, and one biological system given extended description was the jaw apparatus of a python. The conclusion of this investigation was not that the evolution of a complex adaptation was impossible; rather, "awed by the adaptations of living things, to be stunned by their complexity and suitability", it was "to accept the inescapable but not humiliating fact that much of mankind can be seen in a tree or a lizard." In 1985 Cairns-Smith wrote of "interlocking": "How can a complex collaboration between components evolve in small steps?" and used the analogy of the scaffolding called centering—used to build an arch then removed afterwards: "Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean together they had to lean on something else."
However, neither Muller nor Cairns-Smith claimed their ideas as evidence of something supernatural. An early concept of irreducibly complex systems comes from Ludwig von Bertalanffy (1901–1972), an Austrian biologist. He believed that complex systems must be examined as complete, irreducible systems in order to fully understand how they work. He extended his work on biological complexity into a general theory of systems in a book titled General Systems Theory. After James Watson and Francis Crick published the structure of DNA in the early 1950s, General Systems Theory lost many of its adherents in the physical and biological sciences. However, systems theory remained popular in the social sciences long after its demise in the physical and biological sciences. Creationism Versions of the irreducible complexity argument have been common in young Earth creationist (YEC) creation science journals. For example, in the July 1965 issue of Creation Research Society Quarterly Harold W. Clark argued that the complex interaction of yucca moths with the plants they fertilize would not function if it was incomplete, so could not have evolved; "The whole procedure points so strongly to intelligent design that it is difficult to escape the conclusion that the hand of a wise and beneficent creator has been involved." In 1974 the YEC Henry M. Morris introduced an irreducible complexity concept in his creation science book Scientific Creationism, in which he wrote: "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident." He continued: "This issue can actually be attacked quantitatively, using simple principles of mathematical probability. The problem is simply whether a complex system, in which many components function unitedly together, and in which each component is uniquely necessary to the efficient functioning of the whole, could ever arise by random processes." In 1975 Duane Gish wrote in The Amazing Story of Creation from Science and the Bible: "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident." A 1980 article in the creation science magazine Creation by the YEC Ariel A. Roth said "Creation and various other views can be supported by the scientific data that reveal that the spontaneous origin of the complex integrated biochemical systems of even the simplest organisms is, at best, a most improbable event". In 1981, defending the creation science position in the trial McLean v. Arkansas, Roth said of "complex integrated structures": "This system would not be functional until all the parts were there ... How did these parts survive during evolution ...?" In 1985, countering the creationist claims that all the changes would be needed at once, Cairns-Smith wrote of "interlocking": "How can a complex collaboration between components evolve in small steps?" and used the analogy of the scaffolding called centering—used to build an arch then removed afterwards: "Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean together they had to lean on something else." Neither Muller nor Cairns-Smith said their ideas were evidence of anything supernatural. The bacterial flagellum featured in creation science literature. Morris later claimed that one of their Institute for Creation Research "scientists (the late Dr.
Dick Bliss) was using this example in his talks on creation a generation ago". In December 1992 the creation science magazine Creation called bacterial flagella "rotary engines", and dismissed the possibility that these "incredibly complicated arrangements of matter" could have "evolved by selection of chance mutations. The alternative explanation, that they were created, is much more reasonable." An article in the Creation Research Society Magazine for June 1994 called a flagellum a "bacterial nanomachine", forming the "bacterial rotor-flagellar complex" where "it is clear from the details of their operation that nothing about them works unless every one of their complexly fashioned and integrated components are in place", hard to explain by natural selection. The abstract said that in "terms of biophysical complexity, the bacterial rotor-flagellum is without precedent in the living world. ... To evolutionists, the system presents an enigma; to creationists, it offers clear and compelling evidence of purposeful intelligent design." Intelligent design The biology supplementary textbook for schools Of Pandas and People was drafted presenting creation science arguments, but shortly after the Edwards v. Aguillard ruling, that it was unconstitutional to teach creationism in public school science classes, the authors changed the wording to "intelligent design", introducing the new meaning of this term when the book was published in 1989. In a separate response to the same ruling, law professor Phillip E. Johnson wrote Darwin on Trial, published in 1991, and at a conference in March 1992 brought together key figures in what he later called the 'wedge movement', including biochemistry professor Michael Behe. According to Johnson, around 1992 Behe developed his ideas of what he later called his "irreducible complexity" concept, and first presented these ideas in June 1993 when the "Johnson-Behe cadre of scholars" met at Pajaro Dunes in California. The second edition of Of Pandas and People, published in 1993, had extensive revisions to Chapter 6 Biochemical Similarities with new sections on the complex mechanism of blood clotting and on the origin of proteins. Behe was not named as their author, but in Doubts About Darwin: A History of Intelligent Design, published in 2003, historian Thomas Woodward wrote that "Michael Behe assisted in the rewriting of a chapter on biochemistry in a revised edition of Pandas. The book stands as one of the milestones in the infancy of Design." On Access Research Network [3 February 1999] Behe posted "Molecular Machines: Experimental Support for the Design Inference" with a note that "This paper was originally presented in the Summer of 1994 at the meeting of the C. S. Lewis Society, Cambridge University." An "Irreducible Complexity" section quoted Darwin, then discussed "the humble mousetrap", and "Molecular Machines", going into detail about cilia before saying "Other examples of irreducible complexity abound, including aspects of protein transport, blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis, transcription regulation, and much more. Examples of irreducible complexity can be found on virtually every page of a biochemistry textbook." Suggesting "these things cannot be explained by Darwinian evolution," he said they had been neglected by the scientific community.
Behe first published the term "irreducible complexity" in his 1996 book Darwin's Black Box, where he set out his ideas about theoretical properties of some complex biochemical cellular systems, now including the bacterial flagellum. He posits that evolutionary mechanisms cannot explain the development of such "irreducibly complex" systems. Notably, Behe credits philosopher William Paley for the original concept (alone among the predecessors). Intelligent design advocates argue that irreducibly complex systems must have been deliberately engineered by some form of intelligence. In 2001, Behe wrote: "[T]here is an asymmetry between my current definition of irreducible complexity and the task facing natural selection. I hope to repair this defect in future work." Behe specifically explained that the "current definition puts the focus on removing a part from an already functioning system", but the "difficult task facing Darwinian evolution, however, would not be to remove parts from sophisticated pre-existing systems; it would be to bring together components to make a new system in the first place". In the 2005 Kitzmiller v. Dover Area School District trial, Behe testified under oath that he "did not judge [the asymmetry] serious enough to [have revised the book] yet." Behe additionally testified that the presence of irreducible complexity in organisms would not rule out the involvement of evolutionary mechanisms in the development of organic life. He further testified that he knew of no earlier "peer reviewed articles in scientific journals discussing the intelligent design of the blood clotting cascade," but that there were "probably a large number of peer reviewed articles in science journals that demonstrate that the blood clotting system is indeed a purposeful arrangement of parts of great complexity and sophistication." (The judge ruled that "intelligent design is not science and is essentially religious in nature".) According to the theory of evolution, genetic variations occur without specific design or intent. The environment "selects" the variants that have the highest fitness, which are then passed on to the next generation of organisms. Change occurs by the gradual operation of natural forces over time, perhaps slowly, perhaps more quickly (see punctuated equilibrium). This process is able to adapt complex structures from simpler beginnings, or convert complex structures from one function to another (see spandrel). Most intelligent design advocates accept that evolution occurs through mutation and natural selection at the "micro level", such as changing the relative frequency of various beak lengths in finches, but assert that it cannot account for irreducible complexity, because none of the parts of an irreducible system would be functional or advantageous until the entire system is in place. The mousetrap example Behe uses the mousetrap as an illustrative example of this concept. A mousetrap consists of five interacting pieces: the base, the catch, the spring, the hammer, and the hold-down bar. All of these must be in place for the mousetrap to work, as the removal of any one piece destroys the function of the mousetrap. Likewise, he asserts that biological systems require multiple parts working together in order to function. 
Intelligent design advocates claim that natural selection could not create from scratch those systems for which science is currently unable to find a viable evolutionary pathway of successive, slight modifications, because the selectable function is only present when all parts are assembled. In his 2008 book Only A Theory, biologist Kenneth R. Miller challenges Behe's claim that the mousetrap is irreducibly complex. Miller observes that various subsets of the five components can be devised to form cooperative units, ones that have different functions from the mousetrap and so, in biological terms, could form functional spandrels before being adapted to the new function of catching mice. In an example taken from his high school experience, Miller recalls that one of his classmates...struck upon the brilliant idea of using an old, broken mousetrap as a spitball catapult, and it worked brilliantly.... It had worked perfectly as something other than a mousetrap.... my rowdy friend had pulled a couple of parts—probably the hold-down bar and catch—off the trap to make it easier to conceal and more effective as a catapult... [leaving] the base, the spring, and the hammer. Not much of a mousetrap, but a helluva spitball launcher.... I realized why [Behe's] mousetrap analogy had bothered me. It was wrong. The mousetrap is not irreducibly complex after all. Other systems identified by Miller that include mousetrap components include the following: use the spitball launcher as a tie clip (same three-part system with different function) remove the spring from the spitball launcher/tie clip to create a two-part key chain (base + hammer) glue the spitball launcher/tie clip to a sheet of wood to create a clipboard (launcher + glue + wood) remove the hold-down bar for use as a toothpick (single element system) The point of the reduction is that—in biology—most or all of the components were already at hand, by the time it became necessary to build a mousetrap. As such, it required far fewer steps to develop a mousetrap than to design all the components from scratch. Thus, the development of the mousetrap, said to consist of five different parts which had no function on their own, has been reduced to one step: the assembly from parts that are already present, performing other functions. Consequences Supporters of intelligent design argue that anything less than the complete form of such a system or organ would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Although they accept that some complex systems and organs can be explained by evolution, they claim that organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must have created life or guided its evolution. Accordingly, the debate on irreducible complexity concerns two questions: whether irreducible complexity can be found in nature, and what significance it would have if it did exist in nature. Behe's original examples of irreducibly complex mechanisms included the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system. Behe argues that organs and biological features which are irreducibly complex cannot be wholly explained by current models of evolution. 
In explicating his definition of "irreducible complexity" he notes that: An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. Irreducible complexity is not an argument that evolution does not occur, but rather an argument that it is "incomplete". In the last chapter of Darwin's Black Box, Behe goes on to explain his view that irreducible complexity is evidence for intelligent design. Mainstream critics, however, argue that irreducible complexity, as defined by Behe, can be generated by known evolutionary mechanisms. Behe's claim that no scientific literature adequately modeled the origins of biochemical systems through evolutionary mechanisms has been challenged by TalkOrigins. The judge in the Dover trial wrote "By defining irreducible complexity in the way that he has, Professor Behe attempts to exclude the phenomenon of exaptation by definitional fiat, ignoring as he does so abundant evidence which refutes his argument. Notably, the NAS has rejected Professor Behe's claim for irreducible complexity..." Claimed examples Behe and others have suggested a number of biological features that they believed to be irreducibly complex. Blood clotting cascade The process of blood clotting or coagulation cascade in vertebrates is a complex biological pathway which is given as an example of apparent irreducible complexity. The irreducible complexity argument assumes that the necessary parts of a system have always been necessary, and therefore could not have been added sequentially. However, in evolution, something which is at first merely advantageous can later become necessary. Natural selection can lead to complex biochemical systems being built up from simpler systems, or to existing functional systems being recombined as a new system with a different function. For example, one of the clotting factors that Behe listed as a part of the clotting cascade (Factor XII, also called Hageman factor) was later found to be absent in whales, demonstrating that it is not essential for a clotting system. Many purportedly irreducible structures can be found in other organisms as much simpler systems that utilize fewer parts. These systems, in turn, may have had even simpler precursors that are now extinct. Behe has responded to critics of his clotting cascade arguments by suggesting that homology is evidence for evolution, but not for natural selection. The "improbability argument" also misrepresents natural selection. It is correct to say that a set of simultaneous mutations that form a complex protein structure is so unlikely as to be unfeasible, but that is not what Darwin advocated. His explanation is based on small accumulated changes that take place without a final goal. Each step must be advantageous in its own right, although biologists may not yet understand the reason behind all of them—for example, jawless fish accomplish blood clotting with just six proteins instead of the full ten. Eye The eye is frequently cited by intelligent design and creationism advocates as a purported example of irreducible complexity. Behe used the "development of the eye problem" as evidence for intelligent design in Darwin's Black Box. 
Although Behe acknowledged that the evolution of the larger anatomical features of the eye have been well-explained, he pointed out that the complexity of the minute biochemical reactions required at a molecular level for light sensitivity still defies explanation. Creationist Jonathan Sarfati has described the eye as evolutionary biologists' "greatest challenge as an example of superb 'irreducible complexity' in God's creation", specifically pointing to the supposed "vast complexity" required for transparency. In an often misquoted passage from On the Origin of Species, Charles Darwin appears to acknowledge the eye's development as a difficulty for his theory. However, the quote in context shows that Darwin actually had a very good understanding of the evolution of the eye (see fallacy of quoting out of context). He notes that "to suppose that the eye ... could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree". Yet this observation was merely a rhetorical device for Darwin. He goes on to explain that if gradual evolution of the eye could be shown to be possible, "the difficulty of believing that a perfect and complex eye could be formed by natural selection ... can hardly be considered real". He then proceeded to roughly map out a likely course for evolution using examples of gradually more complex eyes of various species. Since Darwin's day, the eye's ancestry has become much better understood. Although learning about the construction of ancient eyes through fossil evidence is problematic due to the soft tissues leaving no imprint or remains, genetic and comparative anatomical evidence has increasingly supported the idea of a common ancestry for all eyes. Current evidence does suggest possible evolutionary lineages for the origins of the anatomical features of the eye. One likely chain of development is that the eyes originated as simple patches of photoreceptor cells that could detect the presence or absence of light, but not its direction. When, via random mutation across the population, the photosensitive cells happened to have developed on a small depression, it endowed the organism with a better sense of the light's source. This small change gave the organism an advantage over those without the mutation. This genetic trait would then be "selected for" as those with the trait would have an increased chance of survival, and therefore progeny, over those without the trait. Individuals with deeper depressions would be able to discern changes in light over a wider field than those individuals with shallower depressions. As ever deeper depressions were advantageous to the organism, gradually, this depression would become a pit into which light would strike certain cells depending on its angle. The organism slowly gained increasingly precise visual information. And again, this gradual process continued as individuals having a slightly shrunken aperture of the eye had an advantage over those without the mutation as an aperture increases how collimated the light is at any one specific group of photoreceptors. As this trait developed, the eye became effectively a pinhole camera which allowed the organism to dimly make out shapes—the nautilus is a modern example of an animal with such an eye. Finally, via this same selection process, a protective layer of transparent cells over the aperture was differentiated into a crude lens, and the interior of the eye was filled with humours to assist in focusing images. 
In this way, eyes are recognized by modern biologists as actually a relatively unambiguous and simple structure to evolve, and many of the major developments of the eye's evolution are believed to have taken place over only a few million years, during the Cambrian explosion. Behe asserts that this is only an explanation of the gross anatomical steps, however, and not an explanation of the changes in discrete biochemical systems that would have needed to take place. Behe maintains that the complexity of light sensitivity at the molecular level and the minute biochemical reactions required for those first "simple patches of photoreceptor[s]" still defies explanation, and that the proposed series of infinitesimal steps to get from patches of photoreceptors to a fully functional eye would actually be considered great, complex leaps in evolution if viewed on the molecular scale. Other intelligent design proponents claim that the evolution of the entire visual system would be difficult rather than the eye alone. Flagella The flagella of certain bacteria constitute a molecular motor requiring the interaction of about 40 different protein parts. The flagellum (or cilium) developed from the pre-existing components of the eukaryotic cytoskeleton. In bacterial flagella, strong evidence points to an evolutionary pathway from a Type III secretory system, a simpler bacterial secretion system. Despite this, Behe presents this as a prime example of an irreducibly complex structure defined as "a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning", and argues that since "an irreducibly complex system that is missing a part is by definition nonfunctional", it could not have evolved gradually through natural selection. However, each of the three types of flagella—eukaryotic, bacterial, and archaeal—has been shown to have evolutionary pathways. For archaeal flagella, there is a molecular homology with bacterial Type IV pili, pointing to an evolutionary link. In all these cases, intermediary, simpler forms of the structures are possible and provide partial functionality. Reducible complexity. In contrast to Behe's claims, many proteins can be deleted or mutated and the flagellum still works, even though sometimes at reduced efficiency. In fact, the composition of flagella is surprisingly diverse across bacteria with many proteins only found in some species but not others. Hence the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. Further studies have shown that, contrary to claims of "irreducible complexity", flagella and the type-III secretion system share several components which provides strong evidence of a shared evolutionary history (see below). In fact, this example shows how a complex system can evolve from simpler components. Multiple processes were involved in the evolution of the flagellum, including horizontal gene transfer. Evolution from type three secretion systems. The basal body of the flagella has been found to be similar to the Type III secretion system (TTSS), a needle-like structure that pathogenic germs such as Salmonella and Yersinia pestis use to inject toxins into living eukaryote cells. The needle's base has ten elements in common with the flagellum, but it is missing forty of the proteins that make a flagellum work. 
The TTSS system negates Behe's claim that taking away any one of the flagellum's parts would prevent the system from functioning. On this basis, Kenneth Miller notes that, "The parts of this supposedly irreducibly complex system actually have functions of their own." Studies have also shown that similar parts of the flagellum in different bacterial species can have different functions despite showing evidence of common descent, and that certain parts of the flagellum can be removed without eliminating its functionality. Behe responded to Miller by asking "why doesn't he just take an appropriate bacterial species, knock out the genes for its flagellum, place the bacterium under selective pressure (for mobility, say), and experimentally produce a flagellum—or any equally complex system—in the laboratory?" However, a laboratory experiment has been performed where "immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps". Dembski has argued that phylogenetically, the TTSS is found in a narrow range of bacteria, which makes it seem to him to be a late innovation, whereas flagella are widespread throughout many bacterial groups, and he argues that it was an early innovation. Against Dembski's argument, different flagella use completely different mechanisms, and publications show a plausible path in which bacterial flagella could have evolved from a secretion system. Cilium motion The construction of the cilium, in which axoneme microtubules are moved by the sliding of the dynein protein, was cited by Behe as an example of irreducible complexity. He further said that the advances in knowledge in the subsequent 10 years had shown that the complexity of intraflagellar transport for the two-hundred-component cilium and many other cellular structures is substantially greater than was known earlier. Response of the scientific community Like intelligent design, the concept it seeks to support, irreducible complexity has failed to gain any notable acceptance within the scientific community. Reducibility of "irreducible" systems Researchers have proposed potentially viable evolutionary pathways for allegedly irreducibly complex systems such as blood clotting, the immune system and the flagellum—the three examples Behe proposed. John H. McDonald even showed his example of a mousetrap to be reducible. If irreducible complexity is an insurmountable obstacle to evolution, it should not be possible to conceive of such pathways. Niall Shanks and Karl H. Joplin, both of East Tennessee State University, have shown that systems satisfying Behe's characterization of irreducible biochemical complexity can arise naturally and spontaneously as the result of self-organizing chemical processes. They also assert that what evolved biochemical and molecular systems actually exhibit is "redundant complexity"—a kind of complexity that is the product of an evolved biochemical process. They claim that Behe overestimated the significance of irreducible complexity because of his simple, linear view of biochemical reactions, resulting in his taking snapshots of selective features of biological systems, structures, and processes, while ignoring the redundant complexity of the context in which those features are naturally embedded. They also criticized his over-reliance on overly simplistic metaphors, such as his mousetrap.
A computer model of the co-evolution of proteins binding to DNA, published in the peer-reviewed journal Nucleic Acids Research, consisted of several parts (DNA binders and DNA binding sites) which contribute to the basic function; removal of either one leads immediately to the death of the organism. This model fits the definition of irreducible complexity exactly, yet it evolves. (The program can be run from Ev program.) One can compare a mousetrap with a cat in this context. Both normally function so as to control the mouse population. The cat has many parts that can be removed leaving it still functional; for example, its tail can be bobbed, or it can lose an ear in a fight. Comparing the cat and the mousetrap, then, one sees that the mousetrap (which is not alive) offers better evidence, in terms of irreducible complexity, for intelligent design than the cat. Even looking at the mousetrap analogy, several critics have described ways in which the parts of the mousetrap could have independent uses or could develop in stages, demonstrating that it is not irreducibly complex. Moreover, even cases where removing a certain component in an organic system will cause the system to fail do not demonstrate that the system could not have been formed in a step-by-step, evolutionary process. By analogy, stone arches are irreducibly complex—if you remove any stone the arch will collapse—yet humans build them easily enough, one stone at a time, by building over centering that is removed afterward. Similarly, naturally occurring arches of stone form by the weathering away of bits of stone from a large concretion that has formed previously. Evolution can act to simplify as well as to complicate. This raises the possibility that seemingly irreducibly complex biological features may have been achieved with a period of increasing complexity, followed by a period of simplification. A team led by Joseph Thornton, assistant professor of biology at the University of Oregon's Center for Ecology and Evolutionary Biology, using techniques for resurrecting ancient genes, reconstructed the evolution of an apparently irreducibly complex molecular system. The April 7, 2006 issue of Science published this research. Irreducible complexity may not actually exist in nature, and the examples given by Behe and others may not in fact represent irreducible complexity, but can be explained in terms of simpler precursors. The theory of facilitated variation challenges irreducible complexity. Marc W. Kirschner, a professor and chair of the Department of Systems Biology at Harvard Medical School, and John C. Gerhart, a professor in Molecular and Cell Biology, University of California, Berkeley, presented this theory in 2005. They describe how certain mutations and changes can cause apparent irreducible complexity. Thus, seemingly irreducibly complex structures are merely "very complex", or they are simply misunderstood or misrepresented. Gradual adaptation to new functions The precursors of complex systems, when they are not useful in themselves, may be useful to perform other, unrelated functions. Evolutionary biologists argue that evolution often works in this kind of blind, haphazard manner in which the function of an early form is not necessarily the same as the function of the later form. The term used for this process is exaptation. The mammalian middle ear (derived from a jawbone) and the panda's thumb (derived from a wrist bone spur) provide classic examples.
A 2006 article in Nature demonstrates intermediate states leading toward the development of the ear in a Devonian fish (about 360 million years ago). Furthermore, recent research shows that viruses play a heretofore unexpected role in evolution by mixing and matching genes from various hosts. Arguments for irreducibility often assume that things started out the same way they ended up—as we see them now. However, that may not necessarily be the case. In the Dover trial an expert witness for the plaintiffs, Ken Miller, demonstrated this possibility using Behe's mousetrap analogy. By removing several parts, Miller made the object unusable as a mousetrap, but he pointed out that it was now a perfectly functional, if unstylish, tie clip. Methods by which irreducible complexity may evolve Irreducible complexity can be seen as equivalent to an "uncrossable valley" in a fitness landscape. A number of mathematical models of evolution have explored the circumstances under which such valleys can, nevertheless, be crossed. An example of a structure that is claimed in Dembski's book No Free Lunch to be irreducibly complex, but evidently has evolved, is the protein T-urf13, which is responsible for the cytoplasmic male sterility of waxy corn and is due to a completely new gene. It arose from the fusion of several non-protein-coding fragments of mitochondrial DNA and the occurrence of several mutations, all of which were necessary. Behe's book Darwin Devolves claims that things like this would take billions of years and could not arise from random tinkering, but the corn was bred during the 20th century. When presented with T-urf13 as an example for the evolvability of irreducibly complex systems, the Discovery Institute resorted to its flawed probability argument based on false premises, akin to the Texas sharpshooter fallacy. Falsifiability and experimental evidence Some critics, such as Jerry Coyne (professor of evolutionary biology at the University of Chicago) and Eugenie Scott (a physical anthropologist and former executive director of the National Center for Science Education) have argued that the concept of irreducible complexity and, more generally, intelligent design is not falsifiable and, therefore, not scientific. Behe argues that the theory that irreducibly complex systems could not have evolved can be falsified by an experiment where such systems are evolved. For example, he posits taking bacteria with no flagellum and imposing a selective pressure for mobility. If, after a few thousand generations, the bacteria evolved the bacterial flagellum, then Behe believes that this would refute his theory. This has been done: a laboratory experiment has been performed where "immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps". Other critics take a different approach, pointing to experimental evidence that they consider falsification of the argument for intelligent design from irreducible complexity. For example, Kenneth Miller describes the lab work of Barry G. Hall on E. coli as showing that "Behe is wrong". Other evidence that irreducible complexity is not a problem for evolution comes from the field of computer science, which routinely uses computer analogues of the processes of evolution in order to automatically design complex solutions to problems. 
The results of such genetic algorithms are frequently irreducibly complex since the process, like evolution, both removes non-essential components over time as well as adding new components. The removal of unused components with no essential function, like the natural process where rock underneath a natural arch is removed, can produce irreducibly complex structures without requiring the intervention of a designer. Researchers applying these algorithms automatically produce human-competitive designs—but no human designer is required. Argument from ignorance Intelligent design proponents attribute to an intelligent designer those biological structures they believe are irreducibly complex and therefore they say a natural explanation is insufficient to account for them. However, critics view irreducible complexity as a special case of the "complexity indicates design" claim, and thus see it as an argument from ignorance and as a God-of-the-gaps argument. Eugenie Scott and Glenn Branch of the National Center for Science Education note that intelligent design arguments from irreducible complexity rest on the false assumption that a lack of knowledge of a natural explanation allows intelligent design proponents to assume an intelligent cause, when the proper response of scientists would be to say that we do not know, and further investigation is needed. Other critics describe Behe as saying that evolutionary explanations are not detailed enough to meet his standards, while at the same time presenting intelligent design as exempt from having to provide any positive evidence at all. False dilemma Irreducible complexity is at its core an argument against evolution. If truly irreducible systems are found, the argument goes, then intelligent design must be the correct explanation for their existence. However, this conclusion is based on the assumption that current evolutionary theory and intelligent design are the only two valid models to explain life, a false dilemma. In the Dover trial At the 2005 Kitzmiller v. Dover Area School District trial, expert witness testimony defending ID and IC was given by Behe and Scott Minnich, who had been one of the "Johnson-Behe cadre of scholars" at Pajaro Dunes in 1993, was prominent in ID, and was now a tenured associate professor in microbiology at the University of Idaho. Behe conceded that there are no peer-reviewed papers supporting his claims that complex molecular systems, like the bacterial flagellum, the blood-clotting cascade, and the immune system, were intelligently designed nor are there any peer-reviewed articles supporting his argument that certain complex molecular structures are "irreducibly complex." There was extensive discussion of IC arguments about the bacterial flagellum, first published in Behe's 1996 book, and when Minnich was asked if similar claims in a 1994 Creation Research Society article presented the same argument, Minnich said he did not have any problem with that statement. In the final ruling of Kitzmiller v. Dover Area School District, Judge Jones specifically singled out irreducible complexity: "... creationists made the same argument that the complexity of the bacterial flagellum supported creationism as Professors Behe and Minnich now make for ID. (P-853; P-845; 37:155–56 (Minnich))." 
(Page 34) "Professor Behe admitted in "Reply to My Critics" that there was a defect in his view of irreducible complexity because, while it purports to be a challenge to natural selection, it does not actually address "the task facing natural selection." and that "Professor Behe wrote that he hoped to "repair this defect in future work..." (Page 73) "As expert testimony revealed, the qualification on what is meant by "irreducible complexity" renders it meaningless as a criticism of evolution. (3:40 (Miller)). In fact, the theory of evolution proffers exaptation as a well-recognized, well-documented explanation for how systems with multiple parts could have evolved through natural means." (Page 74) "By defining irreducible complexity in the way that he has, Professor Behe attempts to exclude the phenomenon of exaptation by definitional fiat, ignoring as he does so abundant evidence which refutes his argument. Notably, the NAS has rejected Professor Behe's claim for irreducible complexity..." (Page 75) "As irreducible complexity is only a negative argument against evolution, it is refutable and accordingly testable, unlike ID [Intelligent Design], by showing that there are intermediate structures with selectable functions that could have evolved into the allegedly irreducibly complex systems. (2:15–16 (Miller)). Importantly, however, the fact that the negative argument of irreducible complexity is testable does not make testable the argument for ID. (2:15 (Miller); 5:39 (Pennock)). Professor Behe has applied the concept of irreducible complexity to only a few select systems: (1) the bacterial flagellum; (2) the blood-clotting cascade; and (3) the immune system. Contrary to Professor Behe's assertions with respect to these few biochemical systems among the myriad existing in nature, however, Dr. Miller presented evidence, based upon peer-reviewed studies, that they are not in fact irreducibly complex." (Page 76) "...on cross-examination, Professor Behe was questioned concerning his 1996 claim that science would never find an evolutionary explanation for the immune system. He was presented with fifty-eight peer-reviewed publications, nine books, and several immunology textbook chapters about the evolution of the immune system; however, he simply insisted that this was still not sufficient evidence of evolution, and that it was not "good enough." (23:19 (Behe))." (Page 78) "We therefore find that Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large. (17:45–46 (Padian); 3:99 (Miller)). Additionally, even if irreducible complexity had not been rejected, it still does not support ID as it is merely a test for evolution, not design. (2:15, 2:35–40 (Miller); 28:63–66 (Fuller)). We will now consider the purportedly "positive argument" for design encompassed in the phrase used numerous times by Professors Behe and Minnich throughout their expert testimony, which is the "purposeful arrangement of parts." Professor Behe summarized the argument as follows: We infer design when we see parts that appear to be arranged for a purpose. The strength of the inference is quantitative; the more parts that are arranged, the more intricately they interact, the stronger is our confidence in design. The appearance of design in aspects of biology is overwhelming. 
Since nothing other than an intelligent cause has been demonstrated to be able to yield such a strong appearance of design, Darwinian claims notwithstanding, the conclusion that the design seen in life is real design is rationally justified. (18:90–91, 18:109–10 (Behe); 37:50 (Minnich)). As previously indicated, this argument is merely a restatement of the Reverend William Paley's argument applied at the cell level. Minnich, Behe, and Paley reach the same conclusion, that complex organisms must have been designed using the same reasoning, except that Professors Behe and Minnich refuse to identify the designer, whereas Paley inferred from the presence of design that it was God. (1:6–7 (Miller); 38:44, 57 (Minnich)). Expert testimony revealed that this inductive argument is not scientific and as admitted by Professor Behe, can never be ruled out. (2:40 (Miller); 22:101 (Behe); 3:99 (Miller))." (Pages 79–80) Notes and references Further reading External links Supportive Michael J. Behe home page About Irreducible Complexity Discovery Institute Behe's Reply to his Critics (PDF) How to Explain Irreducible Complexity -- A Lab Manual Discovery Institute Institute for Creation Research (PDF) Irreducible Complexity: Definition & Evaluation by Craig Rusbult, Ph.D. Irreducible Complexity Revisited (PDF) Critical Behe, Biochemistry, and the Invisible Hand Darwin vs. Intelligent Design (again), by H. Allen Orr (review of Darwin's Black Box) Devolution: Why intelligent design isn't (The New Yorker) Does irreducible complexity imply Intelligent Design? by Mark Perakh Evolution of the Eye (Video) Zoologist Dan-Erik Nilsson demonstrates eye evolution through intermediate stages with working model. (PBS) Facilitated Variation Himma, Kenneth Einar. Design Arguments for the Existence of God. Internet Encyclopedia of Philosophy: 2. Contemporary Versions of the Design Argument, a. The Argument from Irreducible Biochemical Complexity Kitzmiller vs. Dover transcripts Miller, Kenneth R. textbook website Miller's "The Flagellum Unspun: The Collapse of Irreducible Complexity" Talk.origins archive (see talk.origins) TalkDesign.org (sister site to talk.origins archive on intelligent design) The bacterial flagellar motor: brilliant evolution or intelligent design? Matt Baker, ABC Science, 7 July 2015 Unlocking cell secrets bolsters evolutionists (Chicago Tribune) Biological systems Complex systems theory Creationist objections to evolution Intelligent design
Irreducible complexity
Engineering,Biology
11,009
76,436,818
https://en.wikipedia.org/wiki/NGC%203985
NGC 3985 is a barred spiral galaxy in the constellation Ursa Major. It is located at a distance of about 45 million light years from Earth, which, given its apparent dimensions, means that NGC 3985 is about 18,000 light years across. NGC 3985 is situated north of the celestial equator and, as such, it is more easily visible from the Northern Hemisphere. The galaxy appears to have one spiral arm. NGC 3985 belongs to the NGC 3877 group, which is part of the south Ursa Major groups, part of the Virgo Supercluster. Other galaxies in the same group are NGC 3726, NGC 3893, NGC 3896, NGC 3906, NGC 3928, NGC 3949, and NGC 4010. References Barred spiral galaxies Ursa Major Ursa Major Cluster 3985 06921 +08-22-045 11541+4836 037542
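The quoted physical size follows from the small-angle relation between distance and apparent (angular) size. A minimal Python sketch is given below; the apparent major axis of about 1.4 arcminutes used here is an assumed illustrative figure (the article itself quotes only the distance and the resulting linear size):

```python
import math

ARCMIN_TO_RAD = math.pi / (180.0 * 60.0)

def linear_size_ly(distance_ly: float, angular_size_arcmin: float) -> float:
    """Small-angle approximation: linear size = distance * angle (angle in radians)."""
    return distance_ly * angular_size_arcmin * ARCMIN_TO_RAD

distance_ly = 45e6        # ~45 million light years (from the article)
apparent_arcmin = 1.4     # assumed apparent major axis, for illustration only

print(f"{linear_size_ly(distance_ly, apparent_arcmin):,.0f} light years")
# prints roughly 18,000 light years, consistent with the size quoted above
```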
NGC 3985
Astronomy
189
75,977,697
https://en.wikipedia.org/wiki/Swinnerton-Dyer%20polynomial
In algebra, the Swinnerton-Dyer polynomials are a family of polynomials, introduced by Peter Swinnerton-Dyer, that serve as examples where polynomial factorization algorithms have worst-case runtime. They have the property of being reducible modulo every prime, while being irreducible over the rational numbers. They are a standard counterexample in number theory. Given a finite set S = {p_1, …, p_n} of prime numbers, the Swinnerton-Dyer polynomial associated to S is the polynomial: f_S(x) = ∏ (x ± √p_1 ± √p_2 ± … ± √p_n), where the product extends over all 2^n choices of sign in the enclosed sum. The polynomial f_S has degree 2^n and integer coefficients, which alternate in sign. If S contains at least two primes, then f_S is reducible modulo p for all primes p, into linear and quadratic factors, but irreducible over the rational numbers. The Galois group of f_S is (ℤ/2ℤ)^n. The first two Swinnerton-Dyer polynomials are x² − 2 (for S = {2}) and x⁴ − 10x² + 1 (for S = {2, 3}). References Polynomials
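The construction and the factorization behaviour can be checked directly with a computer algebra system. The following Python/SymPy sketch (the helper name swinnerton_dyer is ad hoc, introduced here only for illustration) expands the product of all sign combinations for S = {2, 3}, confirms irreducibility over the rationals, and factors the result modulo a few small primes:

```python
from itertools import product
from sympy import symbols, sqrt, expand, degree, Poly, factor_list

x = symbols('x')

def swinnerton_dyer(primes):
    """Product of (x + e1*sqrt(p1) + ... + en*sqrt(pn)) over all 2**n sign choices."""
    poly = 1
    for signs in product((1, -1), repeat=len(primes)):
        poly *= x + sum(s * sqrt(p) for s, p in zip(signs, primes))
    return expand(poly)          # radicals cancel, leaving integer coefficients

f = swinnerton_dyer([2, 3])
print(f)                              # x**4 - 10*x**2 + 1
print(Poly(f, x).is_irreducible)      # True: irreducible over the rationals

for p in (2, 3, 5, 7, 11, 13):
    _, factors = factor_list(f, x, modulus=p)
    print(p, [degree(g, x) for g, _ in factors])   # factor degrees are all 1 or 2
```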
Swinnerton-Dyer polynomial
Mathematics
172
7,092,764
https://en.wikipedia.org/wiki/Characteristic%20mode%20analysis
Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalizes the operator relating the field and the induced sources. Under certain conditions, the set of the CM is unique and complete (at least theoretically) and thereby capable of describing the behavior of a studied object in full. This article deals with characteristic mode decomposition in electromagnetics, the domain in which the CM theory was originally proposed. Background CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. The theory was subsequently generalized by Harrington and Mautz for antennas. Harrington, Mautz and their students also successively developed several other extensions of the theory. Even though some precursors were published back in the late 1940s, the full potential of CM remained unrecognized for an additional 40 years. The capabilities of CM were revisited in 2007 and, since then, interest in CM has dramatically increased. The subsequent boom of CM theory is reflected by the number of prominent publications and applications. Definition For simplicity, only the original form of the CM – formulated for perfectly electrically conducting (PEC) bodies in free space – will be treated in this article. The electromagnetic quantities will solely be represented as Fourier images in the frequency domain. The Lorenz gauge is used. The scattering of an electromagnetic wave on a PEC body is represented via a boundary condition on the PEC body, namely n̂ × (E_i + E_s) = 0, with n̂ representing the unitary normal to the PEC surface, E_i representing the incident electric field intensity, and E_s representing the scattered electric field intensity, defined as E_s = −jωA − ∇φ, with j being the imaginary unit, ω being the angular frequency, A being the vector potential, A(r) = μ₀ ∫_Ω G(r, r′) J(r′) dS′, μ₀ being the vacuum permeability, φ being the scalar potential, φ(r) = −(1/(jωε₀)) ∫_Ω ∇′ · J(r′) G(r, r′) dS′, ε₀ being the vacuum permittivity, G(r, r′) = e^(−jk|r − r′|) / (4π|r − r′|) being the scalar Green's function, and k being the wavenumber. The integro-differential impedance operator Z, which maps the surface current density J onto the tangential electric field via E_i,tan = Z(J) = [jωA(J) + ∇φ(J)]_tan, is the one to be diagonalized via characteristic modes. The governing equation of the CM decomposition is X(J_n) = λ_n R(J_n), (1) with R and X being the real and imaginary parts of the impedance operator, respectively: Z = R + jX. (2) The outcome of (1) is a set of characteristic modes J_n, n ∈ {1, 2, …}, accompanied by associated characteristic numbers λ_n. Clearly, (1) is a generalized eigenvalue problem, which, however, cannot be analytically solved (except for a few canonical bodies). Therefore, the numerical solution described in the following paragraph is commonly employed. Matrix formulation Discretization of the body of the scatterer Ω into M subdomains as Ω ≈ ∪ Ω_m and using a set of N linearly independent piece-wise continuous functions ψ_n, n ∈ {1, …, N}, allows the current density J to be represented as J(r) ≈ Σ_n I_n ψ_n(r), and, by applying the Galerkin method, the impedance operator (2) is recast into an impedance matrix Z = R + jX with elements Z_mn = ∫_Ω ψ_m · Z(ψ_n) dS. The eigenvalue problem (1) is then recast into its matrix form X I_n = λ_n R I_n, which can easily be solved using, e.g., the generalized Schur decomposition or the implicitly restarted Arnoldi method, yielding a finite set of expansion coefficient vectors I_n and associated characteristic numbers λ_n. The properties of the CM decomposition are investigated below. Properties The properties of the CM decomposition are demonstrated in its matrix form. First, recall that the bilinear forms I^H R I and I^H X I, where superscript H denotes the Hermitian transpose and where I represents an arbitrary surface current distribution, correspond (up to a normalization constant) to the radiated power and the reactive net power, respectively. The following properties can then be easily distilled: The weighting matrix R is theoretically positive definite and X is indefinite.
The Rayleigh quotient λ_n = (I_n^H X I_n) / (I_n^H R I_n) then spans the range −∞ < λ_n < ∞ and indicates whether the characteristic mode is capacitive (λ_n < 0), inductive (λ_n > 0), or in resonance (λ_n = 0). In reality, the Rayleigh quotient is limited by the numerical dynamics of the machine precision used, and the number of correctly found modes is limited. The characteristic numbers evolve with frequency, i.e., λ_n = λ_n(ω); they can cross each other, or they can be the same (in the case of degeneracies). For this reason, the tracking of modes is often applied to get smooth curves λ_n(ω). Unfortunately, this process is partly heuristic and the tracking algorithms are still far from perfection. The characteristic modes can be chosen as real-valued functions, J_n. In other words, characteristic modes form a set of equiphase currents. The CM decomposition is invariant with respect to the amplitude of the characteristic modes. This fact is used to normalize the currents so that they radiate unit power, I_m^H R I_n = δ_mn, which also implies I_m^H X I_n = λ_n δ_mn and I_m^H Z I_n = (1 + jλ_n) δ_mn. This last relation presents the ability of characteristic modes to diagonalize the impedance operator (2) and demonstrates far-field orthogonality, i.e., the modal far fields F_m and F_n are mutually orthogonal over the far-field sphere. Modal quantities The modal currents can be used to evaluate antenna parameters in their modal form, for example: modal far field F_n (ê — polarization, θ and φ — direction), modal directivity D_n, modal radiation efficiency η_n, modal quality factor Q_n, and modal impedance Z_n. These quantities can be used for analysis, feeding synthesis, radiator shape optimization, or antenna characterization. Applications and further development The number of potential applications is enormous and still growing: antenna analysis and synthesis, design of MIMO antennas, compact antenna design (RFID, Wi-Fi), UAV antennas, selective excitation of chassis and platforms, model order reduction, bandwidth enhancement, nanotubes and metamaterials, and validation of computational electromagnetics codes. Prospective topics include electrically large structures calculated using the MLFMA, dielectrics, use of the combined field integral equation, periodic structures, and formulations for arrays. Software CM decomposition has been implemented in major electromagnetic simulators, namely in FEKO, CST-MWS, and WIPL-D, and support has been announced for other packages, for example HFSS and CEM One. In addition, there is a plethora of in-house and academic packages which are capable of evaluating CM and many associated parameters. Alternative bases CM are useful for understanding a radiator's operation. They have been used with great success for many practical purposes. However, it is important to stress that they are not perfect, and it is often better to use other formulations such as energy modes, radiation modes, stored energy modes or radiation efficiency modes. References Electromagnetism Electrodynamics Antennas (radio) Numerical differential equations Computational electromagnetics
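The matrix form of (1) is an ordinary generalized eigenvalue problem and can be reproduced numerically with standard linear-algebra tools. The sketch below is illustrative only: it assumes the real and imaginary parts R and X of a method-of-moments impedance matrix are already available, and it substitutes small random symmetric matrices for them purely to exercise the algebra. It solves X I_n = λ_n R I_n with SciPy and verifies the normalizations I_m^H R I_n = δ_mn and I_m^H X I_n = λ_n δ_mn stated above.

```python
import numpy as np
from scipy.linalg import eigh

# Stand-ins for the R (radiation) and X (reactance) matrices.  In practice
# these are the real and imaginary parts of a method-of-moments impedance
# matrix Z = R + jX; here small random symmetric matrices are used purely
# to exercise the algebra (an assumption of this sketch).
rng = np.random.default_rng(0)
N = 6
B = rng.standard_normal((N, N))
R = B @ B.T + np.eye(N)            # symmetric positive definite
X = rng.standard_normal((N, N))
X = (X + X.T) / 2                  # symmetric, generally indefinite

# Generalized eigenvalue problem (1) in matrix form: X I_n = lambda_n R I_n
lam, I = eigh(X, R)                # columns of I are characteristic currents

# eigh normalizes the eigenvectors so that I^H R I = identity,
# i.e. each mode radiates unit power, and I^H X I = diag(lambda_n).
assert np.allclose(I.T @ R @ I, np.eye(N), atol=1e-10)
assert np.allclose(I.T @ X @ I, np.diag(lam), atol=1e-10)

# lambda_n < 0: capacitive mode, lambda_n > 0: inductive, lambda_n ~ 0: resonant
for n, l in enumerate(lam):
    kind = "inductive" if l > 0 else "capacitive"
    print(f"mode {n}: lambda = {l:+.3f} ({kind})")
```

With a genuine impedance matrix from an electromagnetic solver in place of the random stand-ins, the same two calls reproduce the characteristic currents and characteristic numbers discussed above.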
Characteristic mode analysis
Physics,Mathematics
1,229
52,768,889
https://en.wikipedia.org/wiki/CSIRO%20Oceans%20and%20Atmosphere
CSIRO Oceans and Atmosphere (O&A) (2014–2022) was one of the then 8 Business Units (formerly: Flagships) of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia's largest government-supported science research agency. In December 2022 it was merged with CSIRO Land and Water to form a single, larger Business Unit called simply, "CSIRO Environment". History The CSIRO Oceans and Atmosphere (O&A) Business Unit was formed in 2014 as one of the then 10 "Flagship" operational units of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) as part of a major organisational restructure; from 2015 onwards the term "Flagship" was officially dropped. This Business Unit was formed essentially as a synthesis of the pre-existing CSIRO Division of Marine and Atmospheric Research (CMAR), representing the scientific capability, and the previously established Wealth from Oceans (WfO) Flagship, which was the route via which much of the relevant Australian government research funding was directed. In 2016, its Director was Dr. Ken Lee, previously WfO Flagship Director; in 2017 its Director was Dr. Tony Worby, previously with the Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC); and for the period 2021–2022 its final Director was Dr. Dan Metcalfe. The O&A Business Unit employed between 350 and 400 staff who were/are located at its various laboratories including Hobart (Tasmania), Aspendale (Victoria), Dutton Park (Queensland), Black Mountain (Canberra) and Floreat Park (Western Australia). For 2016 it was quoted as operating with an annual budget of $108M Australian Dollars with its research organised into the following programs: Climate Science Centre; Coastal Development and Management; Earth System Assessment; Engineering and Technology; Marine Resources and Industries; and Ocean and Climate Dynamics. Certain previous CMAR activities, notably those involving the operation of the Marine National Facility (research vessel) RV Investigator and several scientific collections, are now managed within the separate CSIRO National Facilities and Collections Program. The previous CSIRO Division of Marine and Atmospheric Research was itself formed as a result of a 2005 merger between the former CSIRO Division of Marine Research, with laboratories in Hobart, Brisbane, and Perth, and CSIRO Division of Atmospheric Research, with laboratories in Aspendale and Canberra; the Division of Marine Research was formed in 1997 as a merger between two previous CSIRO Divisions, the Division of Fisheries Research and the Division of Oceanography, both with their headquarters in Hobart since 1984; prior to that time, the Division of Fisheries and Oceanography (subsequently separate Divisions) had occupied facilities in Cronulla, New South Wales since its inception in 1938 (following the CSIRO's departure this site became the New South Wales State Cronulla Fisheries Research Centre). Additional details of the somewhat convoluted organisational history of the relevant Divisions and their predecessors are available here. In December 2022 it was announced that CSIRO Oceans and Atmosphere was to merge with CSIRO Land and Water to form a new Business Unit, simply entitled Environment. 
Seagoing capabilities Through the 1980s and 1990s the marine Divisions of CSIRO had the use of both the RV Southern Surveyor, equipped for biological as well as oceanographic research, and the purpose-built RV Franklin for physical and chemical oceanographic research, both of which served at various times as the Marine National Facility for the nation (meaning that other agencies could also carry out research using these vessels at what was effectively a subsidised rate by the Australian government). The last of the vessels to be retired, the Southern Surveyor, was replaced in 2014 by a new purpose-built research vessel to serve as the Marine National Facility, the RV Investigator. Coupled with these major vessels, all capable of significant ocean-going research expeditions, staff were able to use a range of smaller boats and sometimes, charter vessels to carry out research in a range of coastal waters. 2016 Climate Science cuts controversy and subsequent partial restoration In February 2016 the chief executive of CSIRO, Dr Larry Marshall, announced that research into the fundamentals of climate science was no longer a priority for CSIRO and up to 110 jobs were feared to be cut from the climate research section(s) of the Oceans and Atmosphere Unit. After overwhelming negative reaction both within Australia and overseas, along with the forced redundancy of prominent climate scientists including the internationally renowned sea level expert Dr John Church, the Australian Government intervened with a directive and promise of new money to support the restoration of 15 jobs and the creation of a new Climate Science Centre to be based in Hobart with a staff of 40, with funding guaranteed for 10 years from 2016, although the expected number of job losses for O&A was still estimated at 75. While the establishment of the new Centre was described as a "major U-turn in the direction of the CSIRO" and a win for the Turnbull government over the previous CSIRO announcement, the generally positive reaction from other scientists was qualified by the fact that the new Centre would still represent a net loss to CSIRO's previous capability in this area. Selected notable scientists associated with O&A and its predecessors Kenneth Radway Allen - fisheries biologist, International Whaling Commission (IWC) panel member, and former head of the CSIRO Division of Fisheries and Oceanography in Cronulla Greg Ayers - atmospheric scientist, Fellow of the Australian Academy of Technological Sciences and Engineering, and subsequently Director of the Australian Bureau of Meteorology, 2009-2012 John A. Church - renowned climate scientist, winner of a number of medals and Fellow of the Australian Academy of Science, also co-convening lead author for the International Panel for Climate Change (IPCC) Fifth Assessment Report Shirley Jeffrey - discoverer of chlorophyll C and internationally renowned microalgal researcher, winner of numerous medals and Fellow of the Australian Academy of Science Peter R. 
Last - ichthyologist, former curator of the Australian National Fish Collection, and responsible for the description of numerous new shark and ray species; co-author (with John Stevens) of Sharks and Rays of Australia (2009) Trevor McDougall - oceanographer, Fellow of the Royal Society and 2011 winner of the Prince Albert I Medal for significant work in the physical and chemical sciences of the oceans Graeme Pearman - international expert on climate change, winner of numerous medals and Fellow of the Australian Academy of Science Michael Raupach - climate scientist and founding co-chair of the Global Carbon Project (GCP) and Fellow of the Australian Academy of Science Keith J. Sainsbury - researcher on shelf ecosystems and winner of the 2004 Japan Prize for scientific achievement Penny Whetton - climate researcher, a lead author of the IPCC's Third Assessment Report, and of the Fourth Assessment Report which was awarded the 2007 Nobel Peace Prize (jointly with Al Gore) Susan Wijffels - oceanographer with special interest in the international Argo float program; winner of the Australian Meteorological and Oceanographic Society's Priestley Medal and the Australian Academy of Science's Dorothy Hill Award in recognition of her efforts to understand the role of the oceans in climate change. Books on CSIRO's marine research activities CSIRO At Sea, a "popular" account of the early research activities of the marine components of the relevant CSIRO Divisions (former Divisions of Fisheries, Fisheries and Oceanography, Oceanography, and Fisheries Research) was published in 1988, a few years after the relocation of the majority of CSIRO's marine research activities to Hobart from Cronulla, New South Wales. See also Network of Aquaculture Centres in Asia-Pacific References External links Former CSIRO Oceans and Atmosphere web page (Archived copy, November 2022) Former CSIRO Wealth from Oceans web page (Archived copy, February 2014) Former CSIRO Marine and Atmospheric Research Division home page (accessed 4 January 2017) CSIRO Marine and Atmospheric Research Publications Lists - CMAR compilations (accessed 4 January 2017) CSIRO Marine and Atmospheric Research Publications as listed by Google Scholar (accessed 4 January 2017) 2014 establishments in Australia 2022 disestablishments in Australia Scientific organisations based in Australia Marine biology Fisheries agencies Oceanography Governmental meteorological agencies in Oceania CSIRO
CSIRO Oceans and Atmosphere
Physics,Biology,Environmental_science
1,665
6,499,752
https://en.wikipedia.org/wiki/Electrical%20fault
In an electric power system, a fault or fault current is any abnormal electric current. For example, a short circuit is a fault in which a live wire touches a neutral or ground wire. An open-circuit fault occurs if a circuit is interrupted by a failure of a current-carrying wire (phase or neutral) or a blown fuse or circuit breaker. In three-phase systems, a fault may involve one or more phases and ground, or may occur only between phases. In a "ground fault" or "earth fault", current flows into the earth. The prospective short-circuit current of a predictable fault can be calculated for most situations. In power systems, protective devices can detect fault conditions and operate circuit breakers and other devices to limit the loss of service due to a failure. In a polyphase system, a fault may affect all phases equally, which is a "symmetric fault". If only some phases are affected, the resulting "asymmetric fault" becomes more complicated to analyse. The analysis of these types of faults is often simplified by using methods such as symmetrical components. The design of systems to detect and interrupt power system faults is the main objective of power-system protection. Transient fault A transient fault is a fault that is no longer present if power is disconnected for a short time and then restored; or an insulation fault which only temporarily affects a device's dielectric properties which are restored after a short time. Many faults in overhead power lines are transient in nature. When a fault occurs, equipment used for power system protection operate to isolate the area of the fault. A transient fault will then clear and the power-line can be returned to service. Typical examples of transient faults include: momentary tree contact bird or other animal contact lightning strike conductor clashing Transmission and distribution systems use an automatic re-close function which is commonly used on overhead lines to attempt to restore power in the event of a transient fault. This functionality is not as common on underground systems as faults there are typically of a persistent nature. Transient faults may still cause damage both at the site of the original fault or elsewhere in the network as fault current is generated. Persistent fault A persistent fault is present regardless of power being applied. Faults in underground power cables are most often persistent due to mechanical damage to the cable, but are sometimes transient in nature due to lightning. Types of fault Asymmetric fault An asymmetric or unbalanced fault does not affect each of the phases equally. Common types of asymmetric fault, and their causes: line-to-line fault - a short circuit between lines, caused by ionization of air, or when lines come into physical contact, for example due to a broken insulator. In transmission line faults, roughly 5% - 10% are asymmetric line-to-line faults. line-to-ground fault - a short circuit between one line and ground, very often caused by physical contact, for example due to lightning or other storm damage. In transmission line faults, roughly 65% - 70% are asymmetric line-to-ground faults. double line-to-ground fault - two lines come into contact with the ground (and each other), also commonly due to storm damage. In transmission line faults, roughly 15% - 20% are asymmetric double line-to-ground. Symmetric fault A symmetric or balanced fault affects each of the phases equally. In transmission line faults, roughly 5% are symmetric. These faults are rare compared to asymmetric faults. 
Two kinds of symmetric fault are line to line to line (L-L-L) and line to line to line to ground (L-L-L-G). Symmetric faults account for 2 to 5% of all system faults. However, they can cause very severe damage to equipment even though the system remains balanced. Bolted fault One extreme is where the fault has zero impedance, giving the maximum prospective short-circuit current. Notionally, all the conductors are considered connected to ground as if by a metallic conductor; this is called a "bolted fault". It would be unusual in a well-designed power system to have a metallic short circuit to ground but such faults can occur by mischance. In one type of transmission line protection, a "bolted fault" is deliberately introduced to speed up operation of protective devices. Ground fault (earth fault) A ground fault (earth fault) is any failure that allows unintended connection of power circuit conductors with the earth. Such faults can cause objectionable circulating currents, or may energize the housings of equipment at a dangerous voltage. Some special power distribution systems may be designed to tolerate a single ground fault and continue in operation. Wiring codes may require an insulation monitoring device to give an alarm in such a case, so the cause of the ground fault can be identified and remedied. If a second ground fault develops in such a system, it can result in overcurrent or failure of components. Even in systems that are normally connected to ground to limit overvoltages, some applications require a Ground Fault Interrupter or similar device to detect faults to ground. Realistic faults Realistically, the resistance in a fault can be from close to zero to fairly high relative to the load resistance. A large amount of power may be consumed in the fault, compared with the zero-impedance case where the power is zero. Also, arcs are highly non-linear, so a simple resistance is not a good model. All possible cases need to be considered for a good analysis. Arcing fault Where the system voltage is high enough, an electric arc may form between power system conductors and ground. Such an arc can have a relatively high impedance (compared to the normal operating levels of the system) and can be difficult to detect by simple overcurrent protection. For example, an arc of several hundred amperes on a circuit normally carrying a thousand amperes may not trip overcurrent circuit breakers but can do enormous damage to bus bars or cables before it becomes a complete short circuit. Utility, industrial, and commercial power systems have additional protection devices to detect relatively small but undesired currents escaping to ground. In residential wiring, electrical regulations may now require arc-fault circuit interrupters on building wiring circuits, to detect small arcs before they cause damage or a fire. For example, these measures are taken in locations involving running water. Analysis Symmetric faults can be analyzed via the same methods as any other phenomena in power systems, and in fact many software tools exist to accomplish this type of analysis automatically (see power flow study). However, there is another method which is as accurate and is usually more instructive. First, some simplifying assumptions are made. It is assumed that all electrical generators in the system are in phase, and operating at the nominal voltage of the system. Electric motors can also be considered to be generators, because when a fault occurs, they usually supply rather than draw power. 
The voltages and currents are then calculated for this base case. Next, the location of the fault is considered to be supplied with a negative voltage source, equal to the voltage at that location in the base case, while all other sources are set to zero. This method makes use of the principle of superposition. To obtain a more accurate result, these calculations should be performed separately for three separate time ranges: subtransient is first, and is associated with the largest currents transient comes between subtransient and steady-state steady-state occurs after all the transients have had time to settle An asymmetric fault breaks the underlying assumptions used in three-phase power, namely that the load is balanced on all three phases. Consequently, it is impossible to directly use tools such as the one-line diagram, where only one phase is considered. However, due to the linearity of power systems, it is usual to consider the resulting voltages and currents as a superposition of symmetrical components, to which three-phase analysis can be applied. In the method of symmetric components, the power system is seen as a superposition of three components: a positive-sequence component, in which the phases are in the same order as the original system, i.e., a-b-c a negative-sequence component, in which the phases are in the opposite order as the original system, i.e., a-c-b a zero-sequence component, which is not truly a three-phase system, but instead all three phases are in phase with each other. To determine the currents resulting from an asymmetric fault, one must first know the per-unit zero-, positive-, and negative-sequence impedances of the transmission lines, generators, and transformers involved. Three separate circuits are then constructed using these impedances. The individual circuits are then connected together in a particular arrangement that depends upon the type of fault being studied (this can be found in most power systems textbooks). Once the sequence circuits are properly connected, the network can then be analyzed using classical circuit analysis techniques. The solution results in voltages and currents that exist as symmetrical components; these must be transformed back into phase values by using the A matrix. Analysis of the prospective short-circuit current is required for selection of protective devices such as fuses and circuit breakers. If a circuit is to be properly protected, the fault current must be high enough to operate the protective device within as short a time as possible; also the protective device must be able to withstand the fault current and extinguish any resulting arcs without itself being destroyed or sustaining the arc for any significant length of time. The magnitude of fault currents differ widely depending on the type of earthing system used, the installation's supply type and earthing system, and its proximity to the supply. For example, for a domestic UK 230 V, 60 A TN-S or USA 120 V/240 V supply, fault currents may be a few thousand amperes. Large low-voltage networks with multiple sources may have fault levels of 300,000 amperes. A high-resistance-grounded system may restrict line to ground fault current to only 5 amperes. Prior to selecting protective devices, prospective fault current must be measured reliably at the origin of the installation and at the furthest point of each circuit, and this information applied properly to the application of the circuits. 
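The transformation between phase quantities and symmetrical components referred to above (the "A matrix") is a short linear-algebra exercise. The sketch below is a minimal illustration using an assumed set of unbalanced phase voltages invented for the example; it is not a complete fault study, which would additionally require the per-unit sequence impedances and the fault-type-specific connection of the sequence networks.

```python
import numpy as np

# Fortescue operator: a = 1 at an angle of 120 degrees
a = np.exp(2j * np.pi / 3)

# A matrix maps sequence components [V0, V1, V2] to phase values [Va, Vb, Vc]
A = np.array([[1, 1,    1],
              [1, a**2, a],
              [1, a,    a**2]])
A_inv = np.linalg.inv(A)   # equals (1/3) * [[1,1,1],[1,a,a**2],[1,a**2,a]]

def polar(mag, deg):
    """Phasor from magnitude and angle in degrees."""
    return mag * np.exp(1j * np.deg2rad(deg))

# Example unbalanced phase voltages (illustrative per-unit values only)
V_phase = np.array([polar(1.00,    0.0),   # Va
                    polar(0.90, -125.0),   # Vb
                    polar(0.95,  118.0)])  # Vc

V_seq = A_inv @ V_phase    # [V0 (zero), V1 (positive), V2 (negative)]
for name, v in zip(("zero", "positive", "negative"), V_seq):
    print(f"{name:9s} sequence: {abs(v):.3f} at {np.degrees(np.angle(v)):7.1f} deg")

# Transforming back with the A matrix recovers the original phase values
assert np.allclose(A @ V_seq, V_phase)
```

A perfectly balanced set of voltages would give zero- and negative-sequence components of zero, which is why a nonzero negative- or zero-sequence current is a useful indicator of an asymmetric fault.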
Detecting and locating faults Overhead power lines are easiest to diagnose since the problem is usually obvious, e.g., a tree has fallen across the line, or a utility pole is broken and the conductors are lying on the ground. Locating faults in a cable system can be done either with the circuit de-energized, or in some cases, with the circuit under power. Fault location techniques can be broadly divided into terminal methods, which use voltages and currents measured at the ends of the cable, and tracer methods, which require inspection along the length of the cable. Terminal methods can be used to locate the general area of the fault, to expedite tracing on a long or buried cable. In very simple wiring systems, the fault location is often found through inspection of the wires. In complex wiring systems (for example, aircraft wiring) where the wires may be hidden, wiring faults are located with a time-domain reflectometer. The time-domain reflectometer sends a pulse down the wire and then analyzes the returning reflected pulse to identify faults within the electrical wire. In historic submarine telegraph cables, sensitive galvanometers were used to measure fault currents; by testing at both ends of a faulted cable, the fault location could be isolated to within a few miles, which allowed the cable to be grappled up and repaired. The Murray loop and the Varley loop were two types of connections for locating faults in cables. Sometimes an insulation fault in a power cable will not show up at lower voltages. A "thumper" test set applies a high-energy, high-voltage pulse to the cable. Fault location is done by listening for the sound of the discharge at the fault. While this test contributes to damage at the cable site, it is practical because the faulted location would have to be re-insulated when found in any case. In a high resistance grounded distribution system, a feeder may develop a fault to ground but the system continues in operation. The faulted, but energized, feeder can be found with a ring-type current transformer collecting all the phase wires of the circuit; only the circuit containing a fault to ground will show a net unbalanced current. To make the ground fault current easier to detect, the grounding resistor of the system may be switched between two values so that the fault current pulses. Batteries The prospective fault current of larger batteries, such as deep-cycle batteries used in stand-alone power systems, is often given by the manufacturer. In Australia, when this information is not given, the prospective fault current in amperes "should be considered to be 6 times the nominal battery capacity at the C A·h rate," according to AS 4086 part 2 (Appendix H). See also Electrical safety Fault (technology) References General Power engineering Engineering failures
Electrical fault
Technology,Engineering
2,685
36,806,716
https://en.wikipedia.org/wiki/Fort%20de%20Seclin
The Fort de Seclin, also known as Fort Duhoux, is located near the commune of Seclin, France, about south of Lille. Built from 1873 to 1875, it is part of the Séré de Rivières system of fortifications that France built following the defeat of the Franco-Prussian War. It was never modernized to cope with improvements in artillery technology in the late 19th century. It has been preserved and is interpreted by a local preservation association for the public. Description The Fort de Seclin is trapezoidal in shape, with a central artillery position on top of barracks and support facilities, surrounded by a defended ditch and counterscarp. The ditch is covered by a double caponier and an aileron. The main entry has its own ravelin. The two-level barracks are recessed into courtyards and covered with earth and turf. The fort is constructed in brick and stone masonry and was never upgraded with concrete protection. The fort was initially armed with about forty artillery pieces, served by between 700 and 800 men. It had two satellite batteries. History World War I The fort did not see significant action during World War I, as it was to the rear of Lille and was among the last positions to be overrun. The Germans took over the fort and used it as a supply depot until 1918, when it was occupied by British forces from the 68th Battalion, King's Liverpool Regiment, as they advanced into Lille. World War II The Fort de Seclin saw no significant action in the Battle of France in 1940. The fort was again occupied by the Germans and used as a prison for French Resistance fighters. A total of 69 people were executed at the fort. In the most notable event, seven employees of the French national railways were arrested and accused of sabotage, spying and armed resistance. The six male prisoners were executed by firing squad at the Fort de Seclin on 7 June 1944. A monument commemorates the dead. Following the war it continued in use as an ammunition depot until the French Army abandoned it. Present The Fort de Seclin was purchased by private owners in 1996. In 2003 they opened a museum on the site showing military equipment from the era of the Franco-Prussian War up to the First World War, focusing particularly on artillery and horse-drawn carriages. See also Achille Pierre Deffontaines References Bibliography Les fusillés du Fort de Seclin, Mémorial at Ascq 1944 External links Fort de Seclin website Fort de Seclin – Tourism in Nord-Pas de Calais Fort de Seclin at the de l'Association Internationale des Sites et Musées de la Guerre de 1914–1918 Fort de Seclin at fortiff.be Seclin at Chemins de Mémoire Séré de Rivières system World War I museums in France World War II museums in France
Fort de Seclin
Engineering
571
58,443,580
https://en.wikipedia.org/wiki/DG%20Tauri%20B
DG Tauri B, near the T Tauri star DG Tauri, is a young stellar object located 450 light-years (140 parsecs) from Earth, within the Taurus constellation. Observations of DG Tauri B were first made in October and then December 1995 with the six-element Owens Valley millimeter-wave array. Its most notable characteristics are its bipolar jets of molecular gas and dust emanating from either side of the object. Red-shifted carbon monoxide emissions extend out 6,000 AU to the northwest of the object's as-yet-undetermined source and are symmetrically distributed about the jet, while blue-shifted CO emissions are confined to a region with a roughly 500 AU radius. References Taurus (constellation) Astronomical objects discovered in 1995 Pre-main-sequence stars Tauri, DG
DG Tauri B
Astronomy
170
60,128,448
https://en.wikipedia.org/wiki/NGC%20694
NGC 694 is a spiral galaxy approximately 136 million light-years away from Earth in the constellation of Aries. It was discovered by German astronomer Heinrich Louis d'Arrest on December 2, 1861, with the 11-inch refractor at Copenhagen. Nearby galaxies NGC 694 is a member of a small galaxy group known as the NGC 691 group, the main other members of which are NGC 680, NGC 691 and NGC 697. IC 167 lies 5.5 arcminutes to the south-southeast. Supernova SN 2014bu Supernova SN 2014bu was discovered in NGC 694 on June 17, 2014 by Berto Monard. SN 2014bu had a magnitude of about 15.5 and was located at RA 01h50m58.4s, DEC +22d00m00s, J2000.0. It was classified as a type II-P supernova. See also List of NGC objects (1–1000) References External links SEDS Spiral galaxies Aries (constellation) 694 6816 Astronomical objects discovered in 1861 Discoveries by Heinrich Louis d'Arrest
NGC 694
Astronomy
227
52,136,577
https://en.wikipedia.org/wiki/Skeletocutis%20brunneomarginata
Skeletocutis brunneomarginata is a species of poroid crust fungus in the family Polyporaceae. Found in the United States, it was described as new to science in 2007 by Norwegian mycologist Leif Ryvarden. He collected the type in Bent Creek Experimental Forest, North Carolina in 2004. The fungus is very similar in appearance to Skeletocutis kühneri, but with a brown margin and subiculum. S. brunneomarginata is one of 14 Skeletocutis species that occurs in North America. References Fungi described in 2009 Fungi of the United States brunneomarginata Fungi without expected TNC conservation status Fungus species
Skeletocutis brunneomarginata
Biology
145
20,680,975
https://en.wikipedia.org/wiki/Idealized%20greenhouse%20model
The temperatures of a planet's surface and atmosphere are governed by a delicate balancing of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation, but are much more opaque to the lower frequency infrared radiation leaving Earth's surface. Thus heat is easily let in, but is partially trapped by these gases as it tries to leave. The gases do not simply get hotter and hotter: by Kirchhoff's law of thermal radiation, the gases of the atmosphere also have to re-emit the infrared energy that they absorb, and they do so, also at long infrared wavelengths, both upwards into space as well as downwards back towards the Earth's surface. In the long term, the planet's thermal inertia is surmounted and a new thermal equilibrium is reached when all energy arriving on the planet is leaving again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere. Essential features of this model were first published by Svante Arrhenius in 1896. It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. The layers include a surface with constant temperature Ts and an atmospheric layer with constant temperature Ta. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, Ts could be interpreted as a temperature representative of the surface and the lower atmosphere, and Ta could be interpreted as the temperature of the upper atmosphere, also called the skin temperature. In order to justify that Ta and Ts remain constant over the planet, strong oceanic and atmospheric currents can be imagined to provide plentiful lateral mixing. Furthermore, the temperatures are understood to be multi-decadal averages such that any daily or seasonal cycles are insignificant. Simplified energy flows The model will find the values of Ts and Ta that will allow the outgoing radiative power, escaping the top of the atmosphere, to be equal to the absorbed radiative power of sunlight. When applied to a planet like Earth, the outgoing radiation will be longwave and the sunlight will be shortwave. These two streams of radiation will have distinct emission and absorption characteristics. In the idealized model, we assume the atmosphere is completely transparent to sunlight. The planetary albedo αP is the fraction of the incoming solar flux that is reflected back to space (since the atmosphere is assumed totally transparent to solar radiation, it does not matter whether this albedo is imagined to be caused by reflection at the surface of the planet or at the top of the atmosphere or a mixture). The flux density of the incoming solar radiation is specified by the solar constant S0. For application to planet Earth, appropriate values are S0=1366 W m−2 and αP=0.30. Accounting for the fact that the surface area of a sphere is 4 times the area of its intercept (its shadow), the average incoming radiation is S0/4. 
For longwave radiation, the surface of the Earth is assumed to have an emissivity of 1 (i.e. it is a black body in the infrared, which is realistic). The surface emits a radiative flux density F according to the Stefan–Boltzmann law: F = σTs^4, where σ is the Stefan–Boltzmann constant. A key to understanding the greenhouse effect is Kirchhoff's law of thermal radiation. At any given wavelength the absorptivity of the atmosphere will be equal to the emissivity. Radiation from the surface could be in a slightly different portion of the infrared spectrum than the radiation emitted by the atmosphere. The model assumes that the average emissivity (absorptivity) is identical for either of these streams of infrared radiation, as they interact with the atmosphere. Thus, for longwave radiation, one symbol ε denotes both the emissivity and absorptivity of the atmosphere, for any stream of infrared radiation. The infrared flux density out of the top of the atmosphere is computed as: F(top) = εσTa^4 + (1 − ε)σTs^4. In the last term, ε represents the fraction of upward longwave radiation from the surface that is absorbed, the absorptivity of the atmosphere. The remaining fraction (1 − ε) is transmitted to space through an atmospheric window. In the first term on the right, ε is the emissivity of the atmosphere, the adjustment of the Stefan–Boltzmann law to account for the fact that the atmosphere is not optically thick. Thus ε plays the role of neatly blending, or averaging, the two streams of radiation in the calculation of the outward flux density. The energy balance solution Zero net radiation leaving the top of the atmosphere requires: −(1 − αP)S0/4 + εσTa^4 + (1 − ε)σTs^4 = 0. Zero net radiation entering the surface requires: (1 − αP)S0/4 + εσTa^4 − σTs^4 = 0. Energy equilibrium of the atmosphere can be either derived from the two above equilibrium conditions, or independently deduced: εσTs^4 = 2εσTa^4. Note the important factor of 2, resulting from the fact that the atmosphere radiates both upward and downward. Thus the ratio of Ta to Ts is independent of ε: Ta/Ts = (1/2)^(1/4) ≈ 0.841. Thus Ta can be expressed in terms of Ts, and a solution is obtained for Ts in terms of the model input parameters: (1 − αP)S0/4 = (1 − ε/2)σTs^4, or Ts = [(1 − αP)S0 / (4σ(1 − ε/2))]^(1/4). The solution can also be expressed in terms of the effective emission temperature Te, which is the temperature that characterizes the outgoing infrared flux density F, as if the radiator were a perfect radiator obeying F = σTe^4. This is easy to conceptualize in the context of the model. Te is also the solution for Ts, for the case of ε=0, or no atmosphere: Te = [(1 − αP)S0/(4σ)]^(1/4). With the definition of Te: Ts = Te (1 − ε/2)^(−1/4). For a perfect greenhouse, with no radiation escaping from the surface, or ε=1: Ts = 2^(1/4) Te and Ta = Te. Application to Earth Using the parameters defined above to be appropriate for Earth, Te ≈ 255 K. For ε=1: Ts ≈ 303 K. For ε=0.78, Ts ≈ 288.3 K. This value of Ts happens to be close to the published 287.2 K of the average global "surface temperature" based on measurements. ε=0.78 implies 22% of the surface radiation escapes directly to space, consistent with the statement of 15% to 30% escaping in the greenhouse effect. The radiative forcing for doubling carbon dioxide is 3.71 W m−2, in a simple parameterization. This is also the value endorsed by the IPCC. From the equation for the outgoing flux, ΔF = Δε (σTa^4 − σTs^4). Using the values of Ts and Ta for ε=0.78 allows for ΔF = −3.71 W m−2 with Δε = 0.019. Thus a change of ε from 0.78 to 0.80 is consistent with the radiative forcing from a doubling of carbon dioxide. For ε=0.80, Ts ≈ 289.5 K. Thus this model predicts a global warming of ΔTs = 1.2 K for a doubling of carbon dioxide. A typical prediction from a GCM is 3 K surface warming, primarily because the GCM allows for positive feedback, notably from increased water vapor. 
A simple surrogate for including this feedback process is to posit an additional increase of Δε = 0.02, for a total Δε = 0.04, to approximate the effect of the increase in water vapor that would be associated with an increase in temperature. This idealized model then predicts a global warming of ΔTs = 2.4 K for a doubling of carbon dioxide, roughly consistent with the IPCC. Tabular summary with K, C, and F units Extensions The one-level atmospheric model can be readily extended to a multiple-layer atmosphere. In this case the equations for the temperatures become a series of coupled equations. These simple energy-balance models always predict a decreasing temperature away from the surface, and all levels increase in temperature as "greenhouse gases are added". Neither of these effects is fully realistic: in the real atmosphere temperatures increase above the tropopause, and temperatures in that layer are predicted (and observed) to decrease as GHGs are added. This is directly related to the non-greyness of the real atmosphere. An interactive version of a model with two atmospheric layers, which also accounts for convection, is available online. See also Atmospheric model Climate model Planetary equilibrium temperature References Additional bibliography External links Computing wikipedia's idealized greenhouse model Atmospheric radiation Atmospheric sciences Climate variability and change Earth sciences Environmental science Climate modeling
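As a check on the algebra above, the closed-form solution can be evaluated numerically. The short sketch below is an illustrative calculation, not part of the original article; it reproduces the quoted values of Te and Ts, the roughly 1.2 K warming for an emissivity change from 0.78 to 0.80, and the Δε of about 0.019 corresponding to a 3.71 W m−2 forcing.

```python
# Idealized single-layer greenhouse model: Ts = Te / (1 - eps/2)**0.25
SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
S0 = 1366.0              # solar constant, W m^-2
ALBEDO = 0.30            # planetary albedo

def effective_temperature() -> float:
    """Emission temperature Te with no atmosphere (eps = 0)."""
    return ((1.0 - ALBEDO) * S0 / (4.0 * SIGMA)) ** 0.25

def surface_temperature(eps: float) -> float:
    """Surface temperature Ts for atmospheric longwave emissivity eps."""
    return effective_temperature() / (1.0 - eps / 2.0) ** 0.25

def delta_eps_for_forcing(eps: float, forcing: float) -> float:
    """Emissivity change producing the given top-of-atmosphere forcing,
    using dF/d(eps) = sigma*(Ta^4 - Ts^4) at fixed temperatures."""
    Ts = surface_temperature(eps)
    Ta = Ts / 2.0 ** 0.25
    return forcing / (SIGMA * (Ts ** 4 - Ta ** 4))

print(f"Te            = {effective_temperature():6.1f} K")        # ~255 K
print(f"Ts (eps=1.00) = {surface_temperature(1.00):6.1f} K")      # ~303 K
print(f"Ts (eps=0.78) = {surface_temperature(0.78):6.1f} K")      # ~288 K
print(f"Ts (eps=0.80) = {surface_temperature(0.80):6.1f} K")      # ~290 K
print(f"warming for eps 0.78 -> 0.80: "
      f"{surface_temperature(0.80) - surface_temperature(0.78):.2f} K")  # ~1.2 K
print(f"delta eps for 3.71 W/m^2 forcing at eps=0.78: "
      f"{delta_eps_for_forcing(0.78, 3.71):.3f}")                  # ~0.019
```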
Idealized greenhouse model
Environmental_science
1,763
28,290,110
https://en.wikipedia.org/wiki/Haitian%20border%20threadsnake
The Haitian border threadsnake (Mitophis leptepileptus) is a possibly extinct species of snake in the family Leptotyphlopidae endemic to Haiti. Description Last seen in 1984, the species was thought to be already rare, but intensive surveys in the area have not recorded it since. If it is extinct, the causes are almost certainly deforestation of its habitat and agricultural activities, which have intensified since it was last collected. References Mitophis Reptiles of Haiti Endemic fauna of Haiti Reptiles described in 1985 Species known from a single specimen
Haitian border threadsnake
Biology
114
38,196,037
https://en.wikipedia.org/wiki/The%20Petroleum%20Dictionary
The Petroleum Dictionary is a dictionary covering terms used in the American oil industry. It was compiled by Lalia Phipps Boone and was first published by the University of Oklahoma Press in 1952. Overview The Petroleum Dictionary contains short definitions for around 6,000 terms used in the oil industry in America, with a particular focus on slang. It is intended as a record of the history of these colloquialisms, rather than a reference work for individuals in the petroleum industry. Reception Writing in the Journal of Geology, G. Frederick Shepherd from the General American Oil Company of Texas commented on the incomplete nature of the dictionary, describing it as "an excellent start...but not the end point". As an example, he highlighted the fact that it contains only 68 of the 573 abbreviations listed by Rinehart Oil News Company in a guide to language used in oil reports. He attributed these omissions partly to the fact that the work underwent more detailed reviewing by language specialists rather than industry technicians. Shepherd, however, did praise Boone for her detailed and interesting research into the history of words, and her inclusion of a number of euphemisms, which made the dictionary "remarkable for its freshness and occasional spice". Maurice Merrill, comparing the Dictionary with a similar work entitled Manual of Oil and Gas Terms, noted the absence of legal terms in Boone's work, suggesting that for individuals within the oil industry, the Manual of Oil and Gas Terms was a preferable reference work. In the Southwestern Historical Quarterly, reviewer David Donoghue highlighted several "errors of the inexcusable variety", and suggested that the dictionary had "little to offer" to the "oil fielder who is seriously interested in what makes the business go". References External links Full text of The Petroleum Dictionary at The Internet Archive 1952 non-fiction books Books about petroleum English dictionaries Petroleum industry
The Petroleum Dictionary
Chemistry
382
6,852,964
https://en.wikipedia.org/wiki/FastPath
The Kinetics FastPath was a LocalTalk-to-Ethernet bridge (now referred to as a router) created in 1985 to allow Apple Macintosh computers (which at the time only had LocalTalk network connections) to communicate with other computers on Ethernet networks. The product had five significant revisions (known as KFPS-1 through KFPS-5) during its lifetime and was ultimately sold to Shiva Networks late in its existence. The original FastPath was developed to extend AppleTalk on Ethernet for Apple Computer, but from the beginning it was also modeled after an implementation of the Stanford Ethernet–AppleTalk Gateway (SEAGATE) created at Stanford University Medical Center by Bill Croft in 1984 and 1985. SEAGATE was a combination of hardware and software that picked up IP packets from the Ethernet network and encapsulated them inside of DDP packets on the AppleTalk network and conversely picked up specially-encoded DDP packets on the AppleTalk network and placed them on the Ethernet network as IP packets. Although a few sites used the original SEAGATE multibus hardware as defined, it served as a proof-of-concept and was eclipsed by the Kinetics FastPath and similar hardware gateways by other companies. However, many university and research FastPath owners continued to be able to run the Stanford gateway software (later called KIP incorporating MacIP) inside the Kinetics FastPath. This is because KIP was an open source interface to the Kinetics hardware and local modifications and adaptations could be made. By 1987, Apple had begun shipping Macintosh computers that were capable of having Ethernet connections directly, but the LocalTalk networking products prospered into the early 1990s, due to the popularity of Apple's plug-and-play networking and the continued existence of popular LocalTalk devices such as the LaserWriter. See also GatorBox LocalTalk-to-Ethernet bridge MacIP References External links Information about the Shiva FastPath 5 Networking hardware
FastPath
Technology,Engineering
392
39,242,968
https://en.wikipedia.org/wiki/B%C3%BCchner%E2%80%93Curtius%E2%80%93Schlotterbeck%20reaction
The Buchner–Curtius–Schlotterbeck reaction is the reaction of aldehydes or ketones with aliphatic diazoalkanes to form homologated ketones. It was first described by Eduard Buchner and Theodor Curtius in 1885 and later by Fritz Schlotterbeck in 1907. Two German chemists also preceded Schlotterbeck in discovery of the reaction, Hans von Pechmann in 1895 and Viktor Meyer in 1905. The reaction has since been extended to the synthesis of β-keto esters from the condensation between aldehydes and diazo esters. The general reaction scheme is as follows: The reaction yields two possible carbonyl compounds (I and II) along with an epoxide (III). The ratio of the products is determined by the reactant used and the reaction conditions. Reaction mechanism The general mechanism is shown below. The resonance arrow (1) shows a resonance contributor of the diazo compound with a lone pair of electrons on the carbon adjacent to the nitrogen. The diazo compound then does a nucleophilic attack on the carbonyl-containing compound (nucleophilic addition), producing a tetrahedral intermediate (2). This intermediate decomposes by the evolution of nitrogen gas, forming the tertiary carbocation intermediate (3). The reaction is then completed either by the reformation of the carbonyl through a 1,2-rearrangement or by the formation of the epoxide. There are two possible carbonyl products: one formed by migration of R1 (4) and the other by migration of R2 (5). The relative yield of each possible carbonyl is determined by the migratory preferences of the R-groups. The epoxide product is formed by an intramolecular addition reaction in which a lone pair from the oxygen attacks the carbocation (6). This reaction is exothermic due to the stability of nitrogen gas and the carbonyl-containing compounds. This specific mechanism is supported by several observations. First, kinetic studies of reactions between diazomethane and various ketones have shown that the overall reaction follows second order kinetics. Additionally, the reactivity of two series of ketones is in the orders Cl3CCOCH3 > CH3COCH3 > C6H5COCH3 and cyclohexanone > cyclopentanone > cycloheptanone > cyclooctanone. These orders of reactivity are the same as those observed for reactions that are well established as proceeding through nucleophilic attack on a carbonyl group. Scope and variation The reaction was originally carried out in diethyl ether and routinely generated high yields due to the inherent irreversibility of the reaction caused by the formation of nitrogen gas. Though these reactions can be carried out at room temperature, the rate does increase at higher temperatures. Typically, the reaction is carried out at less than refluxing temperatures. The optimal reaction temperature is determined by the specific diazoalkane used. Reactions involving diazomethanes with alkyl or aryl substituents are exothermic at or below room temperature. Reactions involving diazomethanes with acyl or aroyl substituents require higher temperatures. The reaction has since been modified to proceed in the presence of Lewis acids and common organic solvents such as THF and dichloromethane. Reactions generally run at room temperature for about an hour, and the yield ranges from 70% to 80% based on the choice of Lewis acid and solvent. Steric effects Steric effects of the alkyl substituents on the carbonyl reactant have been shown to affect both the rates and yields of the Büchner–Curtius–Schlotterbeck reaction. 
Table 1 shows the percent yield of the ketone and epoxide products as well as the relative rates of reaction for the reactions between several methyl alkyl ketones and diazomethane. The observed decrease in rate and increase in epoxide yield as the size of the alkyl group becomes larger indicates a steric effect. Electronic effects Ketones and aldehydes with electron-withdrawing substituents react more readily with diazoalkanes than those bearing electron-donating substituents (Table 2). In addition to accelerating the reaction, electron-withdrawing substituents typically increase the amount of epoxide produced (Table 2). The effects of substituents on the diazoalkanes are reversed relative to the carbonyl reactants: electron-withdrawing substituents decrease the rate of reaction while electron-donating substituents accelerate it. For example, diazomethane is significantly more reactive than ethyl diazoacetate, though less reactive than its higher alkyl homologs (e.g. diazoethane). Reaction conditions may also affect the yields of carbonyl product and epoxide product. In the reactions of o-nitrobenzaldehyde, p-nitrobenzaldehyde, and phenylacetaldehyde with diazomethane, the ratio of epoxide to carbonyl is increased by the inclusion of methanol in the reaction mixture. The opposite influence has also been observed in the reaction of piperonal with diazomethane, which exhibits increased carbonyl yield in the presence of methanol. Migratory preferences The ratio of the two possible carbonyl products (I and II) obtained is determined by the relative migratory abilities of the carbonyl substituents (R1 and R2). In general, the R-group most capable of stabilizing the partial positive charge formed during the rearrangement migrates preferentially. A prominent exception to this general rule is hydride shifting. The migratory preferences of the carbonyl R-groups can be heavily influenced by solvent and diazoalkane choice. For example, methanol has been shown to promote aryl migration. As shown below, if the reaction of piperonal (IV) with diazomethane is carried out in the absence of methanol, the ketone obtained through a hydride shift is the major product (V). If methanol is the solvent, an aryl shift occurs to form the aldehyde (VI), which cannot be isolated as it continues to react to form the ketone (VII) and the epoxide (VIII) products. The diazoalkane employed can also determine relative yields of products by influencing migratory preferences, as conveyed by the reactions of o-nitropiperonal with diazomethane and diazoethane. In the reaction between o-nitropiperonal (IX) and diazomethane, an aryl shift leads to production of the epoxide (X) in 9 to 1 excess of the ketone product (XI). When diazoethane is substituted for diazomethane, a hydride shift produces the ketone (XII), the only isolable product.
β-Diketones are common biological products, and as such, their synthesis is relevant to biochemical research. Furthermore, the acidic β-hydrogens of β-diketones are useful for broader synthetic purposes, as they can be removed by common bases. Acyl-diazomethane can also add to esters to form β-keto esters, which are important for fatty acid synthesis. As mentioned above, the acidic β-hydrogens also have productive functionality. The Büchner–Curtius–Schlotterbeck reaction can also be used to insert a methylene bridge between a carbonyl carbon and a halogen of an acyl halide. This reaction allows conservation of the carbonyl and halide functionalities. It is possible to isolate nitrogen-containing compounds using the Büchner–Curtius–Schlotterbeck reaction. For example, an acyl-diazomethane can react with an aldehyde in the presence of a DBU catalyst to form isolable α-diazo-β-hydroxy esters (shown below). References Organic reactions Name reactions
Büchner–Curtius–Schlotterbeck reaction
Chemistry
1,836
6,397,823
https://en.wikipedia.org/wiki/C7H8
The molecular formula C7H8 (molar mass: 92.14 g/mol) may refer to: Cycloheptatriene Isotoluenes Norbornadiene Quadricyclane Toluene, or toluol Molecular formulas
C7H8
Physics,Chemistry
68
507,354
https://en.wikipedia.org/wiki/Glossary%20of%20differential%20geometry%20and%20topology
This is a glossary of terms specific to differential geometry and differential topology. The following three glossaries are closely related: Glossary of general topology Glossary of algebraic topology Glossary of Riemannian and metric geometry. See also: List of differential geometry topics Words in italics denote a self-reference to this glossary. A Atlas B Bundle – see fiber bundle. Basic element – A basic element x with respect to an element ξ is an element of a cochain complex (e.g., the complex of differential forms on a manifold) that is closed, dx = 0, and whose contraction by ξ is zero. C Characteristic class Chart Cobordism Codimension – The codimension of a submanifold is the dimension of the ambient space minus the dimension of the submanifold. Connected sum Connection Cotangent bundle – the vector bundle of cotangent spaces on a manifold. Cotangent space Covering Cusp CW-complex D Dehn twist Diffeomorphism – Given two differentiable manifolds M and N, a bijective map f from M to N is called a diffeomorphism if both f and its inverse are smooth functions. Differential form Domain invariance Doubling – Given a manifold M with boundary, doubling is taking two copies of M and identifying their boundaries. As the result we get a manifold without boundary. E Embedding Exotic structure – See exotic sphere and exotic R4. F Fiber – In a fiber bundle, the preimage of a point x in the base is called the fiber over x, often denoted Ex. Fiber bundle Frame – A frame at a point of a differentiable manifold M is a basis of the tangent space at the point. Frame bundle – the principal bundle of frames on a smooth manifold. Flow G Genus Germ Grassmannian bundle Grassmannian manifold H Handle decomposition Hypersurface – A hypersurface is a submanifold of codimension one. I Immersion Integration along fibers Irreducible manifold Isotopy J Jet Jordan curve theorem L Lens space – A lens space is a quotient of the 3-sphere (or (2n + 1)-sphere) by a free isometric action of Zk. Local diffeomorphism M Manifold – A topological manifold is a locally Euclidean Hausdorff space (usually also required to be second-countable). For a given regularity (e.g. piecewise-linear, or differentiable, real or complex analytic, Lipschitz, Hölder, quasi-conformal...), a manifold of that regularity is a topological manifold whose chart transitions have the prescribed regularity. Manifold with boundary Manifold with corners Mapping class group Morse function N Neat submanifold – A submanifold whose boundary equals its intersection with the boundary of the manifold into which it is embedded. O Orbifold Orientation of a vector bundle P Pair of pants – An orientable compact surface with 3 boundary components. All compact orientable surfaces can be reconstructed by gluing pairs of pants along their boundary components. Parallelizable – A smooth manifold is parallelizable if it admits a smooth global frame. This is equivalent to the tangent bundle being trivial. Partition of unity PL-map Poincaré lemma Principal bundle – A principal bundle is a fiber bundle P together with an action on P by a Lie group G that preserves the fibers of P and acts simply transitively on those fibers. Pullback R Rham cohomology S Section Seifert fiber space Submanifold – the image of a smooth embedding of a manifold. Submersion Surface – a two-dimensional manifold or submanifold. Systole – least length of a noncontractible loop. T Tangent bundle – the vector bundle of tangent spaces on a differentiable manifold. Tangent field – a section of the tangent bundle. Also called a vector field. 
Tangent space Thom space Torus Transversality – Two submanifolds M and N intersect transversally if at each point of intersection p their tangent spaces Tp(M) and Tp(N) generate the whole tangent space at p of the total manifold. Triangulation Trivialization Tubular neighborhood V Vector bundle – a fiber bundle whose fibers are vector spaces and whose transition functions are linear maps. Vector field – a section of a vector bundle. More specifically, a vector field can mean a section of the tangent bundle. W Whitney sum – A Whitney sum is an analog of the direct product for vector bundles. Given two vector bundles α and β over the same base B, their cartesian product is a vector bundle over B × B. The diagonal map B → B × B induces a vector bundle over B called the Whitney sum of these vector bundles, denoted by α ⊕ β. Whitney topologies Geometry Wikipedia glossaries using unordered lists
Glossary of differential geometry and topology
Mathematics
943
362,025
https://en.wikipedia.org/wiki/Active%20noise%20control
Active noise control (ANC), also known as noise cancellation (NC), or active noise reduction (ANR), is a method for reducing unwanted sound by the addition of a second sound specifically designed to cancel the first. The concept was first developed in the late 1930s; later developmental work that began in the 1950s eventually resulted in commercial airline headsets with the technology becoming available in the late 1980s. The technology is also used in road vehicles, mobile telephones, earbuds, and headphones. Explanation Sound is a pressure wave, which consists of alternating periods of compression and rarefaction. A noise-cancellation speaker emits a sound wave with the same amplitude but with an inverted phase (also known as antiphase) relative to the original sound. The waves combine to form a new wave, in a process called interference, and effectively cancel each other out – an effect which is called destructive interference. Modern active noise control is generally achieved through the use of analog circuits or digital signal processing. Adaptive algorithms are designed to analyze the waveform of the background aural or nonaural noise, then based on the specific algorithm generate a signal that will either phase shift or invert the polarity of the original signal. This inverted signal (in antiphase) is then amplified and a transducer creates a sound wave directly proportional to the amplitude of the original waveform, creating destructive interference. This effectively reduces the volume of the perceivable noise. A noise-cancellation speaker may be co-located with the sound source to be attenuated. In this case, it must have the same audio power level as the source of the unwanted sound in order to cancel the noise. Alternatively, the transducer emitting the cancellation signal may be located at the location where sound attenuation is wanted (e.g. the user's ear). This requires a much lower power level for cancellation but is effective only for a single user. Noise cancellation at other locations is more difficult as the three-dimensional wavefronts of the unwanted sound and the cancellation signal could match and create alternating zones of constructive and destructive interference, reducing noise in some spots while doubling noise in others. In small enclosed spaces (e.g. the passenger compartment of a car) global noise reduction can be achieved via multiple speakers and feedback microphones, and measurement of the modal responses of the enclosure. Applications Applications can be 1-dimensional or 3-dimensional, depending on the type of zone to protect. Periodic sounds, even complex ones, are easier to cancel than random sounds due to the repetition in the waveform. Protection of a 1-dimension zone is easier and requires only one or two microphones and speakers to be effective. Several commercial applications have been successful: noise-canceling headphones, active mufflers, anti-snoring devices, vocal or center channel extraction for karaoke machines, and the control of noise in air conditioning ducts. The term 1-dimension refers to a simple pistonic relationship between the noise and the active speaker (mechanical noise reduction) or between the active speaker and the listener (headphones). Protection of a 3-dimensional zone requires many microphones and speakers, making it more expensive. 
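The superposition argument above can be checked numerically. The following is a minimal sketch (not part of the original article; the sample rate, tone frequency and timing error are arbitrary assumptions) showing that an exact antiphase copy cancels a tone completely, while a small delay in the anti-noise leaves an audible residue, which is one reason active cancellation works best at low frequencies.

# Hedged illustration: destructive interference between a tone and its inverted copy.
import numpy as np

fs = 48_000                                   # assumed sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)                # 20 ms of signal
noise = 0.8 * np.sin(2 * np.pi * 200 * t)     # unwanted 200 Hz tone
anti = -noise                                 # same amplitude, inverted polarity (antiphase)

print(np.max(np.abs(noise + anti)))           # ~0: complete cancellation

# A 0.1 ms timing error in the anti-noise leaves a clearly audible residual;
# for a fixed delay, the residual grows with frequency.
late = np.roll(anti, int(0.0001 * fs))
print(np.max(np.abs(noise + late)))           # noticeably non-zero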
Noise reduction is more easily achieved with a single listener remaining stationary but if there are multiple listeners or if the single listener turns their head or moves throughout the space then the noise reduction challenge is made much more difficult. High-frequency waves are difficult to reduce in three dimensions due to their relatively short audio wavelength in air. The wavelength in air of sinusoidal noise at approximately 800 Hz is double the distance of the average person's left ear to the right ear; such a noise coming directly from the front will be easily reduced by an active system but coming from the side will tend to cancel at one ear while being reinforced at the other, making the noise louder, not softer. High-frequency sounds above 1000 Hz tend to cancel and reinforce unpredictably from many directions. In sum, the most effective noise reduction in three-dimensional space involves low-frequency sounds. Commercial applications of 3-D noise reduction include the protection of aircraft cabins and car interiors, but in these situations, protection is mainly limited to the cancellation of repetitive (or periodic) noise such as engine-, propeller- or rotor-induced noise. This is because an engine's cyclic nature makes analysis and noise cancellation easier to apply. Modern mobile phones use a multi-microphone design to cancel out ambient noise from the speech signal. Sound is captured from the microphone(s) furthest from the mouth (the noise signal(s)) and from the one closest to the mouth (the desired signal). The signals are processed to cancel the noise from the desired signal, producing improved voice sound quality. In some cases, noise can be controlled by employing active vibration control. This approach is appropriate when the vibration of a structure produces unwanted noise by coupling the vibration into the surrounding air or water. Active vis-à-vis passive noise control Noise control is an active or passive means of reducing sound emissions, often for personal comfort, environmental considerations, or legal compliance. Active noise control is sound reduction using a power source. Passive noise control is sound reduction by noise-isolating materials such as insulation, sound-absorbing tiles, or a muffler rather than a power source. Active noise canceling is best suited for low frequencies. For higher frequencies, the spacing requirements for free space and zone of silence techniques become prohibitive. In acoustic cavity and duct-based systems, the number of nodes grows rapidly with increasing frequency, which quickly makes active noise control techniques unmanageable. Passive treatments become more effective at higher frequencies and often provide an adequate solution without the need for active control. History The first patent for a noise control system——was granted to inventor Paul Lueg in 1936. The patent described how to cancel sinusoidal tones in ducts by phase-advancing the wave and canceling arbitrary sounds in the region around a loudspeaker by inverting the polarity. In the 1950s Lawrence J. Fogel patented systems to cancel the noise in helicopter and airplane cockpits. In 1957 Willard Meeker developed a working model of active noise control applied to a circumaural earmuff. This headset had an active attenuation bandwidth of approximately 50–500 Hz, with a maximum attenuation of approximately 20 dB. By the late 1980s the first commercially available active noise reduction headsets became available. 
They could be powered by NiCad batteries or directly from the aircraft power system. See also Active sound design Adaptive noise cancelling Coherence (physics) Noise-canceling microphone Notes References External links BYU physicists quiet fans in computers, office equipment Anti-Noise, Quieting the Environment with Active Noise Cancellation Technology, IEEE Potentials, April 1992 Christopher E. Ruckman's ANC FAQ (This page was created in 1994 and maintained until approximately 2010, but is no longer active.) Waves of Silence: Digisonix, active noise control, and the digital revolution Audio engineering Noise reduction Noise control Loudspeaker technology
Active noise control
Engineering
1,451
179,098
https://en.wikipedia.org/wiki/Racetrack%20%28game%29
Racetrack is a paper and pencil game that simulates a car race, played by two or more players. The game is played on a squared sheet of paper, with a pencil line tracking each car's movement. The rules for moving represent a car with a certain inertia and physical limits on traction, and the resulting line is reminiscent of how real racing cars move. The game requires players to slow down before bends in the track, and requires some foresight and planning for successful play. The game is popular as an educational tool teaching vectors. The game is also known under names such as Vector Formula, Vector Rally, Vector Race, Graph Racers, PolyRace, Paper and pencil racing, or the Graph paper race game. The basic game The rules are here explained in simple terms. As will follow from a later section, if the mathematical concept of vectors is known, some of the rules may be stated more briefly. The rules may also be stated in terms of the physical concepts velocity and acceleration. The track On a sheet of quadrille paper ("quad pad", e.g. Letter preprinted with a 1/4" square grid, or A4 with a 5 mm square grid), a freehand loop is drawn as the outer boundary of the racetrack. A large ellipse will do for a first game, but some irregularities are needed to make the game interesting. Another freehand loop is drawn inside the first. It can be more or less parallel with the outer loop, or the track can have wider and narrower spots (pinch spots), with usually at least two squares between the loops. A straight starting and finishing line is drawn across the two loops, and a direction for the race is chosen (e.g., counter clockwise). Preparing to play The order of players is agreed upon. Each player chooses a color or mark (such as x and o) to represent the player's car. Each player marks a starting point for their car - a grid intersection at or behind the starting line. The moves All moves will be from one grid point to another grid point. Each grid point has eight neighbouring grid points: Up, down, left, right, and the four diagonal directions. Players take turns to move their cars according to some simple rules. Each move is marked by drawing a line from the starting point of this move to a new point. Each player's first move must be to one of the eight neighbours of their starting position. (The player can also choose to stand still.) On each turn after that, the player can choose to move the same number of squares in the same direction as on the previous turn; the grid point reached by this move is called the principal point for this turn. (E.g., if the previous move was four squares to the right and two squares upwards, then the principal point is found by moving another four squares to the right and two more squares upwards.) However, the player also has the choice of any of the eight neighbours of this principal point. Cars must stay within the boundaries of the racetrack; otherwise they crash. Finding a winner The winner is the first player to complete a lap (cross the finish line). Additional and alternative rules Combining the following rules in various ways, there are many variants of the game. The track The track need not be a closed curve; the starting and finishing lines could be different. Before starting to play, the players may go over the track, agreeing in advance about each grid point near the boundaries as to whether that point is inside or outside the track. Alternatively, the track may be drawn with straight lines only, with corners at grid points only. 
This removes the need to decide dubious points. Players may or may not be allowed to touch the walls, but not to cross them. The moves Instead of allowing moves to any of eight neighbours of the principal point, one may use the four neighbours rule, limiting moves to the principal point or any of its four nearest neighbours. When drawing the track, slippery regions with oil spill may be marked, wherein the cars cannot change velocity at all, or only according to the four neighbours rule. Also, turbo regions may be marked with an arrow with a specific length and direction, wherein possible moves are given by a principal point displaced as indicated by the arrow. These rules may apply to all moves either beginning in, or ending in, or beginning and ending in, or passing through, the marked region. Collisions and crashes Usually, cars are required to stay on the track for the entire length of the move, not just the start and end. On heavily convoluted racetracks, allowing the line segment representing a move to cross the boundary twice (with start and end points inside the track), some unreasonable shortcuts may be allowed. Several cars may be allowed to occupy the same point simultaneously. However, the most common and entertaining rule is that while the line segments are allowed to intersect, a car cannot move to or through a grid point that is occupied by another car, as they would collide. If a player is unable to move according to these rules, the player has crashed. A crashed car may leave the game, or various systems for penalizing crashes can be devised. A player running off the track may be allowed to continue, but is required to brake and turn around, and re-enter the track again crossing the boundary at a point behind the point where it left. At high speeds, this will take a considerable number of moves. Another possibility is to penalize a car with "damage points" for each crash. E.g., if it runs off the track or collides, it receives 1 damage point for each square of the last movement, and comes to an immediate stand-still. A car with 5 damage points, say, cannot run anymore. Finding a winner At the end of the game, one may complete a round. E.g., with three players A, B and C (starting in that order), if B is the first to cross the finish line, C is allowed one more move to complete the A-B-C cycle. The winner is the player whose car is the greatest distance beyond the finish line. If the collision rule mentioned above is used, there is still a considerable advantage in moving first. This may be partially counterbalanced by having the players choose their individual starting points in reverse order. E.g., first C chooses a start point, then B, then A. Then, A makes the first move, followed by B, then C. Another possible rule is to let the loser move first in the next game. Mathematics and physics Each move may be represented by a vector. E.g., a move four squares to the right and two up may be represented by the vector (4,2). The eight neighbour rule allows changing each coordinate of the vector by ±1. E.g., if the previous move was (4,2), the next one may be any of the following nine: (3,3) (4,3) (5,3) (3,2) (4,2) (5,2) (3,1) (4,1) (5,1) If each round represents 1 second and each square represents 1 metre, the vector representing each move is a velocity vector in metres per second. The four neighbour rule allows accelerations up to 1 metre per second squared, and the eight neighbours rule allows accelerations up to √2 metres per second squared. 
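The nine possible continuations listed above are easy to generate mechanically. The sketch below (an illustrative helper, not part of any published rule set) enumerates the grid points reachable from a given position and previous-move vector under the eight-neighbours rule.

# Illustrative sketch: candidate destinations under the eight-neighbours rule.
def next_positions(position, velocity):
    """position and velocity are (x, y) integer tuples; one grid square = one unit."""
    px, py = position
    vx, vy = velocity
    principal = (px + vx, py + vy)            # repeating the previous move
    return [(principal[0] + dx, principal[1] + dy)
            for dx in (-1, 0, 1)              # adjust each velocity component by -1, 0 or +1
            for dy in (-1, 0, 1)]

# Example: the car is at (14, 7) and its previous move was (4, 2).
# The nine reachable points correspond exactly to the nine candidate
# velocities (3,3) (4,3) (5,3) (3,2) (4,2) (5,2) (3,1) (4,1) (5,1) listed above.
print(next_positions((14, 7), (4, 2)))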
A more realistic maximum acceleration for car racing would be 10 metres per second squared, e.g. corresponding to assuming each round to represent a reaction time of 0.5 seconds, and each square to represent 2.5 metres (using 4 neighbour rule). The speed built up by acceleration can only be reduced at the same rate. This restriction reflects the inertia or momentum of the car. Note that in physics, speeding, braking, and turning right or left all are forms of acceleration, represented by one vector. For a sports car, having the same maximum acceleration without loss of traction in all directions is not unrealistic; see Circle of forces. Note, however, that the circle of forces strictly applies to an individual tyre rather than an entire vehicle, that a slightly elongated ellipse would be more realistic than a circle, and that the theory of traction involving this circle or ellipse is quite simplified. History and contemporary use The origins of the game are unknown, but it certainly existed as early as the 1960s. The rules for the game, and a sample track game was published by Martin Gardner in January 1973 in his "Mathematical Games" column in Scientific American; and it was again described in Car and Driver magazine, in August 1973, page 65. Today, the game is used by math and physics teachers around the world when teaching vectors and kinematics. However, the game has a certain charm of its own, and may be played as a pure recreation. Martin Gardner noted that the game was "virtually unknown" in the United States, and called it "a truly remarkable simulation of automobile racing". He mentions having learned the game from Jürg Nievergelt, "a computer scientist at the University of Illinois who picked it up on a recent trip to Switzerland". Car and Driver described it as having an "almost supernatural" resemblance to actual racing, commenting that "If you enter a turn too rapidly, you will spin. If you "brake" too early, it will take you longer to accelerate out of the turn." Triplanetary was a science fiction rocket ship racing game that was sold commercially between 1973 and 1981. It used similar rules to Racetrack but on a hexagonal grid and with the spaceships being placed in the center of the grid cells rather than at the vertices. The game used a laminated board which could be written on with a grease pencil. References See also Paper soccer Mathematical games Paper-and-pencil games Racing games
Racetrack (game)
Mathematics
2,037
52,366,515
https://en.wikipedia.org/wiki/Miaopai
Miaopai () is a Chinese video sharing and live streaming service with 70 million daily active users. References External links Android (operating system) software IOS software Video software Chinese social networking websites Video hosting
Miaopai
Technology
42
18,781,149
https://en.wikipedia.org/wiki/Massey-Harris%20Model%2020
The Massey-Harris Model 20 was a two-plow type of tractor built by Massey-Harris (later Massey Ferguson) from 1946 to 1948. Introduced to commemorate Massey's 100th anniversary in 1947, the 20 was virtually identical to the earlier Model 81, which had first appeared in 1941. About 8,000 Model 20s were sold, in row crop or standard models, with the choice of gasoline or kerosene (known as tractor vaporising oil, or TVO, in Britain) as fuel. The Model 20 was replaced in 1948 by the Model 22. Pricing With a base price of around C$1450, about C$500 more than the 81, the 20 was competitive with Ford and Ferguson-Brown models of the period. Weight The bare weight without ballast was 3,000 lb (1,350 kg) (some 700 lb {300 kg} less than the contemporary Model 30, which dramatically outsold it, but about 400 lb {180 kg} more than the earlier 81). Engine The 124 in3 (2,031 cc) engine inherited from the 81, and the 101 before it, produced 31 hp (23 kW) at the belt, and was manufactured by Continental, like all Massey Harris tractors at the time. Transmission The 20 offered four speeds (against the 30's five), providing a top speed of 2.5 mph (4 km/h) in first (low) and 13.5 mph (21.6 km/h) in fourth (high). References Sources Further reading Pripps, Robert N. The Big Book of Farm Tractors. Vancouver, BC: Raincoast Books, 2001. . __. The Field Guide to Vintage Farm Tractors. Stillwater, MN: Voyageur Press, 2001. __. Vintage Ford Tractors. Stillwater, MN: Voyageur Press, 2001. Denison, Merrill. Harvest Triumphant: The Story of Massey-Harris. New York: Dodd Mead, 1949. Farnsworth, John. The Massey Legacy. Ipswich, Great Britain: Farming Press, 1997. Gay, Larry. Farm Tractors 1975-1995. Saint Joseph, MI: American Society of Agricultural Engineers, 1995. Wendel, C. H. Massey Tractors. Osceola, WI: Motorbooks International, 1992. Tractors Massey-Harris vehicles
Massey-Harris Model 20
Engineering
471
2,596,238
https://en.wikipedia.org/wiki/Shock%20response%20spectrum
A Shock Response Spectrum (SRS) is a graphical representation of a shock, or any other transient acceleration input, in terms of how a Single Degree Of Freedom (SDOF) system (like a mass on a spring) would respond to that input. The horizontal axis shows the natural frequency of a hypothetical SDOF, and the vertical axis shows the peak acceleration which this SDOF would undergo as a consequence of the shock input. Calculation The most direct and intuitive way to generate an SRS from a shock waveform is the following procedure: Pick a damping ratio (or equivalently, a quality factor Q) for your SRS to be based on; Pick a frequency f, and assume that there is a hypothetical Single Degree of Freedom (SDOF) system with a damped natural frequency of f; Calculate (by direct time-domain simulation) the maximum instantaneous absolute acceleration experienced by the mass element of your SDOF at any time during (or after) exposure to the shock in question. This acceleration is a; Draw a dot at (f,a); Repeat steps 2–4 for many other values of f, and connect all the dots together into a smooth curve. The resulting plot of peak acceleration vs test system frequency is called a Shock Response Spectrum. It is often plotted with frequency in Hz, and with acceleration in units of g. Example application Consider a computer chassis containing three cards with fundamental natural frequencies of f1, f2, and f3. Lab tests have previously confirmed that this system survives a certain shock waveform—say, the shock from dropping the chassis from 2 feet above a hard floor. Now, the customer wants to know whether the system will survive a different shock waveform—say, from dropping the chassis from 4 feet above a carpeted floor. If the SRS of the new shock is lower than the SRS of the old shock at each of the three frequencies f1, f2, and f3, then the chassis is likely to survive the new shock. (It is not, however, guaranteed.) Details and limitations Any transient waveform can be presented as an SRS, but the relationship is not unique; many different transient waveforms can produce the same SRS (something one can take advantage of through a process called "Shock Synthesis"). Because it tracks only the peak instantaneous acceleration, the SRS does not contain all the information in the transient waveform from which it was created. Different damping ratios produce different SRSs for the same shock waveform. Zero damping will produce a maximum response. Very high damping produces a very boring SRS: a horizontal line. The level of damping is characterized by the quality factor Q, which can also be thought of as the transmissibility at resonance in the sinusoidal vibration case. Relative damping of 5% results in a Q of 10. An SRS plot is incomplete if it doesn't specify the assumed Q value. An SRS is of little use for fatigue-type damage scenarios, as the transform discards information about how many times a given peak acceleration (and the stress inferred from it) is reached. The SDOF system model can also be used to characterize the severity of vibrations, according to two criteria: the exceeding of characteristic instantaneous stress limits (yield stress, ultimate stress, etc.), for which one defines the extreme response spectrum (ERS), similar to the shock response spectrum; and the damage by fatigue following the application of a large number of cycles, which takes the duration of the vibration into account (the fatigue damage spectrum, FDS). Like many other useful tools, the SRS is not applicable to significantly non-linear systems. 
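The calculation procedure described above can be written directly as a short program. The following is a minimal sketch only (the half-sine input, Q = 10 and the simple explicit time stepping are assumptions for illustration; production SRS code normally uses a more accurate recursive-filter formulation).

# Hedged sketch: maximax SRS of a base-acceleration shock for an SDOF system.
import numpy as np

def srs(base_accel, dt, freqs, Q=10.0):
    """Peak absolute acceleration of a base-excited SDOF vs. its natural frequency."""
    zeta = 1.0 / (2.0 * Q)                     # damping ratio from quality factor
    peaks = []
    for f in freqs:
        w = 2.0 * np.pi * f
        z = zdot = 0.0                         # relative displacement and velocity
        peak = 0.0
        for a in base_accel:                   # z'' + 2*zeta*w*z' + w**2*z = -a_base
            zddot = -a - 2.0 * zeta * w * zdot - w * w * z
            zdot += zddot * dt                 # semi-implicit Euler step
            z += zdot * dt
            xddot = -(2.0 * zeta * w * zdot + w * w * z)   # absolute mass acceleration
            peak = max(peak, abs(xddot))
        peaks.append(peak)
    return np.array(peaks)

# Assumed test input: an 11 ms, 50 g half-sine shock sampled at 100 kHz,
# followed by quiet time so the residual (post-shock) response is also captured.
dt = 1e-5
t = np.arange(0.0, 0.1, dt)
T = 0.011
shock = np.where(t < T, 50.0 * np.sin(np.pi * t / T), 0.0)   # acceleration in g

freqs = np.logspace(1, 3, 60)                  # 10 Hz to 1 kHz test frequencies
spectrum = srs(shock, dt, freqs, Q=10)         # plot spectrum against freqs on log axes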
See also Shock data logger Shock detector References Harris, C., Piersol, A., Harris Shock and Vibration Handbook, Fifth Edition, McGraw-Hill, (2002), . Lalanne, C., Mechanical Vibration and Shock Analysis. Volume 2: Mechanical Shock, Second Edition, Wiley, 2009. MIL-STD-810G, Environmental Test Methods and Engineering Guidelines, 2000, sect 516.6 External links FreeSRS, http://freesrs.sourceforge.net/, is a toolbox in the public domain to calculate SRS. Mechanical vibrations
Shock response spectrum
Physics,Engineering
862
10,088,964
https://en.wikipedia.org/wiki/Remote%20graphics%20unit
A remote graphics unit (RGU) is a device that allows a computer to be separated from some input/output devices such as keyboard, mouse, speakers, and display monitors. The key part being remoted is the graphics sub-system of the computer. History RGUs may have their origin with experiments with graphics controllers on mainframe computers in the 1970s. RGUs have been mostly associated with high end workstations running Unix-like operating systems or Windows since the late 1990s. Generally RGUs are used for special applications like remote sensing, financial services commodity trading desks, computer-aided design, etc. Depending on how one chooses to define RGUs, dedicated X terminals may also be included. Application Usually the reasons that might lie behind the desire to separate the user interface of a computer from the actual computer itself would be: securing computers away from users for corporate or government security, to reduce heat and noise in rooms with many computer operators, or to facilitate computer maintenance by placing all computers in very close proximity to one another. KVM interoperability Unlike other technologies used to achieve this, such as KVM Extension (or Remote KVM) and DVI Extension for example, a remote graphics unit will effectively split a computer's PCI or PCI-Express bus and transmit only bus commands over to the user side. With KVM Extension and DVI Extension, the graphics processing is done by a traditional graphics processing unit (GPU) on the computer side. Bus data is much smaller than rendered graphics data so the theory behind the remote graphics unit is that it is possible to achieve higher resolutions and better graphics performance when there is a large separation in distance between the user-side input/output devices and the computer side. Examples An example of a product line that was commercialized using RGU as the description of the technology is the Matrox Extio series. Extio is a brand that is marketing shorthand for "External I/O". Related products Other products supporting the concept behind the remote graphics unit include bus extension technologies where a standard graphics processing unit is plugged into a remote PCI slot via a standard graphics add-in card. Various types of bus extension technologies are available including the DeTwo System from Amulet Hotkey as well as products from Avocent. Graphics hardware Computer peripherals
Remote graphics unit
Technology
471
31,911,164
https://en.wikipedia.org/wiki/Deferribacter
Deferribacter is a genus in the phylum Deferribacterota (Bacteria). Etymology The name Deferribacter derives from: Latin prefix de-, from; Latin noun ferrum, iron; Neo-Latin masculine gender noun bacter, nominally meaning "a rod" but in effect meaning a bacterium; Neo-Latin masculine gender noun Deferribacter, a rod that reduces iron. Species The genus contains 4 species, namely D. abyssi Miroshnichenko et al. 2003 (Latin genitive case noun abyssi, of immense depths; living in the depths of the ocean); D. autotrophicus Slobodkina et al. 2009 (Neo-Latin masculine gender adjective autotrophicus, autotrophic); D. desulfuricans Takai et al. 2003 (Neo-Latin participle adjective desulfuricans, reducing sulfur); and D. thermophilus Greene et al. 1997, the type species of the genus (Greek noun thermē (θέρμη), heat; Neo-Latin masculine gender adjective philus (from Greek masculine gender adjective φίλος), friend, loving; Neo-Latin masculine gender adjective thermophilus, heat loving). Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). See also Bacterial taxonomy Microbiology List of bacterial orders List of bacteria genera References Bacteria genera Deferribacterota
Deferribacter
Biology
342
25,603
https://en.wikipedia.org/wiki/Rhenium
Rhenium is a chemical element; it has symbol Re and atomic number 75. It is a silvery-gray, heavy, third-row transition metal in group 7 of the periodic table. With an estimated average concentration of 1 part per billion (ppb), rhenium is one of the rarest elements in the Earth's crust. It has one of the highest melting and boiling points of any element. It resembles manganese and technetium chemically and is mainly obtained as a by-product of the extraction and refinement of molybdenum and copper ores. It shows in its compounds a wide variety of oxidation states ranging from −1 to +7. Rhenium was originally discovered in 1908 by Masataka Ogawa, but he mistakenly assigned it as element 43 rather than element 75 and named it nipponium. It was rediscovered in 1925 by Walter Noddack, Ida Tacke and Otto Berg, who gave it its present name. It was named after the river Rhine in Europe, from which the earliest samples had been obtained and worked commercially. Nickel-based superalloys of rhenium are used in combustion chambers, turbine blades, and exhaust nozzles of jet engines. These alloys contain up to 6% rhenium, making jet engine construction the largest single use for the element. The second-most important use is as a catalyst: it is an excellent catalyst for hydrogenation and isomerization, and is used for example in catalytic reforming of naphtha for use in gasoline (rheniforming process). Because of the low availability relative to demand, rhenium is expensive, with price reaching an all-time high in 2008–09 of US$10,600 per kilogram (US$4,800 per pound). As of 2018, its price had dropped to US$2,844 per kilogram (US$1,290 per pound) due to increased recycling and a drop in demand for rhenium catalysts. History In 1908, Japanese chemist Masataka Ogawa announced that he had discovered the 43rd element and named it nipponium (Np) after Japan (Nippon in Japanese). In fact, he had found element 75 (rhenium) instead of element 43: both elements are in the same group of the periodic table. Ogawa's work was often incorrectly cited, because some of his key results were published only in Japanese; it is likely that his insistence on searching for element 43 prevented him from considering that he might have found element 75 instead. Just before Ogawa's death in 1930, Kenjiro Kimura analysed Ogawa's sample by X-ray spectroscopy at the Imperial University of Tokyo, and said to a friend that "it was beautiful rhenium indeed". He did not reveal this publicly, because under the Japanese university culture before World War II it was frowned upon to point out the mistakes of one's seniors, but the evidence became known to some Japanese news media regardless. As time passed with no repetitions of the experiments or new work on nipponium, Ogawa's claim faded away. The symbol Np was later used for the element neptunium, and the name "nihonium", also named after Japan, along with symbol Nh, was later used for element 113. Element 113 was also discovered by a team of Japanese scientists and was named in respectful homage to Ogawa's work. Today, Ogawa's claim is widely accepted as having been the discovery of element 75 in hindsight. Rhenium ( meaning: "Rhine") received its current name when it was rediscovered by Walter Noddack, Ida Noddack, and Otto Berg in Germany. In 1925 they reported that they had detected the element in platinum ore and in the mineral columbite. They also found rhenium in gadolinite and molybdenite. 
In 1928 they were able to extract 1 g of the element by processing 660 kg of molybdenite. It was estimated in 1968 that 75% of the rhenium metal in the United States was used for research and the development of refractory metal alloys. It took several years from that point before the superalloys became widely used. The original mischaracterization by Ogawa in 1908 and final work in 1925 makes rhenium perhaps the last stable element to be understood. Hafnium was discovered in 1923 and all other new elements discovered since then, such as francium, are radioactive. Characteristics Rhenium is a silvery-white metal with one of the highest melting points of all elements, exceeded by only tungsten. (At standard pressure carbon sublimes rather than melts, though its sublimation point is comparable to the melting points of tungsten and rhenium.) It also has one of the highest boiling points of all elements, and the highest among stable elements. It is also one of the densest, exceeded only by platinum, iridium and osmium. Rhenium has a hexagonal close-packed crystal structure. Its usual commercial form is a powder, but this element can be consolidated by pressing and sintering in a vacuum or hydrogen atmosphere. This procedure yields a compact solid having a density above 90% of the density of the metal. When annealed this metal is very ductile and can be bent, coiled, or rolled. Rhenium-molybdenum alloys are superconductive at 10 K; tungsten-rhenium alloys are also superconductive around 4–8 K, depending on the alloy. Rhenium metal itself superconducts at a still lower temperature. In bulk form and at room temperature and atmospheric pressure, the element resists alkalis, sulfuric acid, hydrochloric acid, nitric acid, and aqua regia. It will, however, react with nitric acid upon heating. Isotopes Rhenium has one stable isotope, rhenium-185, which nevertheless occurs in minority abundance, a situation found only in two other elements (indium and tellurium). Naturally occurring rhenium is only 37.4% 185Re, and 62.6% 187Re, which is unstable but has a very long half-life (~10¹⁰ years). A kilogram of natural rhenium emits 1.07 MBq of radiation due to the presence of this isotope. This lifetime can be greatly affected by the charge state of the rhenium atom. The beta decay of 187Re is used for rhenium–osmium dating of ores. The available energy for this beta decay (2.6 keV) is the second lowest known among all radionuclides, only behind the decay from 115In to excited 115Sn* (0.147 keV). The isotope rhenium-186m is notable as being one of the longest lived metastable isotopes with a half-life of around 200,000 years. There are 33 other unstable isotopes that have been recognized, ranging from 160Re to 194Re, the longest-lived of which is 183Re with a half-life of 70 days. Compounds Rhenium compounds are known for all the oxidation states between −3 and +7 except −2. The oxidation states +7, +4, and +3 are the most common. Rhenium is most available commercially as salts of perrhenate, including sodium and ammonium perrhenates. These are white, water-soluble compounds. Tetrathioperrhenate anion [ReS4]− is possible. Halides and oxyhalides The most common rhenium chlorides are ReCl6, ReCl5, ReCl4, and ReCl3. The structures of these compounds often feature extensive Re-Re bonding, which is characteristic of this metal in oxidation states lower than VII. Salts of [Re2Cl8]2− feature a quadruple metal-metal bond. Although the highest rhenium chloride features Re(VI), fluorine gives the d⁰ Re(VII) derivative rhenium heptafluoride. 
Bromides and iodides of rhenium are also well known, including rhenium pentabromide and rhenium tetraiodide. Like tungsten and molybdenum, with which it shares chemical similarities, rhenium forms a variety of oxyhalides. The oxychlorides are most common, and include ReOCl4 and ReOCl3. Oxides and sulfides The most common oxide is the volatile yellow Re2O7. The red rhenium trioxide ReO3 adopts a perovskite-like structure. Other oxides include Re2O5, ReO2, and Re2O3. The sulfides are ReS2 and Re2S7. Perrhenate salts can be converted to tetrathioperrhenate by the action of ammonium hydrosulfide. Other compounds Rhenium diboride (ReB2) is a hard compound having a hardness similar to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride. Organorhenium compounds Dirhenium decacarbonyl is the most common entry to organorhenium chemistry. Its reduction with sodium amalgam gives Na[Re(CO)5] with rhenium in the formal oxidation state −1. Dirhenium decacarbonyl can be oxidised with bromine to bromopentacarbonylrhenium(I): Re2(CO)10 + Br2 → 2 Re(CO)5Br Reduction of this pentacarbonyl with zinc and acetic acid gives pentacarbonylhydridorhenium: Re(CO)5Br + Zn + HOAc → Re(CO)5H + ZnBr(OAc) Methylrhenium trioxide ("MTO"), CH3ReO3, is a volatile, colourless solid that has been used as a catalyst in some laboratory experiments. It can be prepared by many routes; a typical method is the reaction of Re2O7 and tetramethyltin: Re2O7 + (CH3)4Sn → CH3ReO3 + (CH3)3SnOReO3 Analogous alkyl and aryl derivatives are known. MTO catalyses oxidations with hydrogen peroxide. Terminal alkynes yield the corresponding acid or ester, internal alkynes yield diketones, and alkenes give epoxides. MTO also catalyses the conversion of aldehydes and diazoalkanes into an alkene. Nonahydridorhenate A distinctive derivative of rhenium is nonahydridorhenate, originally thought to be the rhenide anion, Re−, but actually containing the [ReH9]2− anion, in which the oxidation state of rhenium is +7. Occurrence Rhenium is one of the rarest elements in Earth's crust with an average concentration of 1 ppb; other sources quote the number of 0.5 ppb making it the 77th most abundant element in Earth's crust. Rhenium is probably not found free in nature (its possible natural occurrence is uncertain), but occurs in amounts up to 0.2% in the mineral molybdenite (which is primarily molybdenum disulfide), the major commercial source, although single molybdenite samples with up to 1.88% have been found. Chile has the world's largest rhenium reserves, part of the copper ore deposits, and was the leading producer as of 2005. It was only recently (in 1994) that the first rhenium mineral was found and described, a rhenium sulfide mineral (ReS2) condensing from a fumarole on Kudriavy volcano, Iturup island, in the Kuril Islands. Kudriavy discharges up to 20–60 kg rhenium per year mostly in the form of rhenium disulfide. Named rheniite, this rare mineral commands high prices among collectors. Production Approximately 80% of rhenium is extracted from porphyry molybdenum deposits. Some ores contain 0.001% to 0.2% rhenium. Roasting the ore volatilizes rhenium oxides. Rhenium(VII) oxide and perrhenic acid readily dissolve in water; they are leached from flue dusts and gases and extracted by precipitating with potassium or ammonium chloride as the perrhenate salts, and purified by recrystallization. Total world production is between 40 and 50 tons/year; the main producers are in Chile, the United States, Peru, and Poland. 
Recycling of used Pt-Re catalyst and special alloys allow the recovery of another 10 tons per year. Prices for the metal rose rapidly in early 2008, from $1000–$2000 per kg in 2003–2006 to over $10,000 in February 2008. The metal form is prepared by reducing ammonium perrhenate with hydrogen at high temperatures: 2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3 There are technologies for the associated extraction of rhenium from productive solutions of underground leaching of uranium ores. Applications Rhenium is added to high-temperature superalloys that are used to make jet engine parts, using 70% of the worldwide rhenium production. Another major application is in platinum–rhenium catalysts, which are primarily used in making lead-free, high-octane gasoline. Alloys The nickel-based superalloys have improved creep strength with the addition of rhenium. The alloys normally contain 3% or 6% of rhenium. Second-generation alloys contain 3%; these alloys were used in the engines for the F-15 and F-16, whereas the newer single-crystal third-generation alloys contain 6% of rhenium; they are used in the F-22 and F-35 engines. Rhenium is also used in the superalloys, such as CMSX-4 (2nd gen) and CMSX-10 (3rd gen) that are used in industrial gas turbine engines like the GE 7FA. Rhenium can cause superalloys to become microstructurally unstable, forming undesirable topologically close packed (TCP) phases. In 4th- and 5th-generation superalloys, ruthenium is used to avoid this effect. Among others the new superalloys are EPM-102 (with 3% Ru) and TMS-162 (with 6% Ru), as well as TMS-138 and TMS-174. For 2006, the consumption is given as 28% for General Electric, 28% Rolls-Royce plc and 12% Pratt & Whitney, all for superalloys, whereas the use for catalysts only accounts for 14% and the remaining applications use 18%. In 2006, 77% of rhenium consumption in the United States was in alloys. The rising demand for military jet engines and the constant supply made it necessary to develop superalloys with a lower rhenium content. For example, the newer CFM International CFM56 high-pressure turbine (HPT) blades will use Rene N515 with a rhenium content of 1.5% instead of Rene N5 with 3%. Rhenium improves the properties of tungsten. Tungsten-rhenium alloys are more ductile at low temperature, allowing them to be more easily machined. The high-temperature stability is also improved. The effect increases with the rhenium concentration, and therefore tungsten alloys are produced with up to 27% of Re, which is the solubility limit. Tungsten-rhenium wire was originally created in efforts to develop a wire that was more ductile after recrystallization. This allows the wire to meet specific performance objectives, including superior vibration resistance, improved ductility, and higher resistivity. One application for the tungsten-rhenium alloys is X-ray sources. The high melting point of both elements, together with their high atomic mass, makes them stable against the prolonged electron impact. Rhenium tungsten alloys are also applied as thermocouples to measure temperatures up to 2200 °C. The high temperature stability, low vapor pressure, good wear resistance and ability to withstand arc corrosion of rhenium are useful in self-cleaning electrical contacts. In particular, the discharge that occurs during electrical switching oxidizes the contacts. However, rhenium oxide Re2O7 is volatile (sublimes at ~360 °C) and therefore is removed during the discharge. 
Rhenium has a high melting point and a low vapor pressure similar to tantalum and tungsten. Therefore, rhenium filaments exhibit a higher stability if the filament is operated not in vacuum, but in oxygen-containing atmosphere. Those filaments are widely used in mass spectrometers, ion gauges and photoflash lamps in photography. Catalysts Rhenium in the form of rhenium-platinum alloy is used as catalyst for catalytic reforming, which is a chemical process to convert petroleum refinery naphthas with low octane ratings into high-octane liquid products. Worldwide, 30% of catalysts used for this process contain rhenium. The olefin metathesis is the other reaction for which rhenium is used as catalyst. Normally Re2O7 on alumina is used for this process. Rhenium catalysts are very resistant to chemical poisoning from nitrogen, sulfur and phosphorus, and so are used in certain kinds of hydrogenation reactions. Other uses The isotopes 186Re and 188Re are radioactive and are used for treatment of liver cancer. They both have similar penetration depth in tissue (5 mm for 186Re and 11 mm for 188Re), but 186Re has the advantage of a longer half life (90 hours vs. 17 hours). 188Re is also being used experimentally in a novel treatment of pancreatic cancer where it is delivered by means of the bacterium Listeria monocytogenes. The 188Re isotope is also used for the rhenium-SCT (skin cancer therapy). The treatment uses the isotope's properties as a beta emitter for brachytherapy in the treatment of basal cell carcinoma and squamous cell carcinoma of the skin. Related by periodic trends, rhenium has a similar chemistry to that of technetium; work done to label rhenium onto target compounds can often be translated to technetium. This is useful for radiopharmacy, where it is difficult to work with technetium – especially the technetium-99m isotope used in medicine – due to its expense and short half-life. Rhenium is used in manufacturing high precision equipment like gyroscopes. Its high density, mechanical stability and corrosion resistance characteristics ensure the equipment's durability and precise performance in demanding conditions. Rhenium cathodes are also used for their stability and precision in spectral analysis. Rhenium is used in aerospace, nuclear, and electronic industries, and it shows potential for application in medical instrumentation. In the rocket industry, it is used in engine components for booster rockets. Additionally, rhenium was employed in the SP-100 program due to its low-temperature ductility. Rhenium's stiffness and high melting point makes it a common gasket material for high pressure experiments in diamond anvil cells. Precautions Very little is known about the toxicity of rhenium and its compounds because they are used in very small amounts. Soluble salts, such as the rhenium halides or perrhenates, could be hazardous due to elements other than rhenium or due to rhenium itself. Only a few compounds of rhenium have been tested for their acute toxicity; two examples are potassium perrhenate and rhenium trichloride, which were injected as a solution into rats. The perrhenate had an LD50 value of 2800 mg/kg after seven days (this is very low toxicity, similar to that of table salt) and the rhenium trichloride showed LD50 of 280 mg/kg. 
Notes References Further reading External links Rhenium at The Periodic Table of Videos (University of Nottingham) Chemical elements Transition metals Noble metals Refractory metals Chemical elements predicted by Dmitri Mendeleev Chemical elements with hexagonal close-packed structure Native element minerals
Rhenium
Physics,Chemistry
4,293
6,022,800
https://en.wikipedia.org/wiki/Hendrik%20Kloosterman
Hendrik Douwe Kloosterman (9 April 1900 – 6 May 1968) was a Dutch mathematician, known for his work in number theory (in particular, for introducing Kloosterman sums) and in representation theory. After completing his master's degree at Leiden University (1918–1922), he studied at the University of Copenhagen with Harald Bohr and at the University of Oxford with G. H. Hardy. In 1924, he received his Ph.D. in Leiden under the supervision of J. C. Kluyver. From 1926 to 1928 he studied at the Universities of Göttingen and Hamburg, and he was an assistant at the University of Münster from 1928 to 1930. Kloosterman was appointed lector (associate professorship) at Leiden University in 1930 and full professor in 1947. In 1950, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. References External links 1900 births 1968 deaths 20th-century Dutch mathematicians Leiden University alumni Academic staff of Leiden University Members of the Royal Netherlands Academy of Arts and Sciences Number theorists People from Smallingerland University of Hamburg alumni
Hendrik Kloosterman
Mathematics
222
23,652,040
https://en.wikipedia.org/wiki/WaveLab%20%28mathematics%20software%29
WaveLab is a collection of MATLAB functions for wavelet analysis. Following the success of the WaveLab package, related packages such as CurveLab and ShearLab have since become available. Wavelets
WaveLab (mathematics software)
Mathematics
38
32,678
https://en.wikipedia.org/wiki/Vocoder
A vocoder (, a portmanteau of voice and encoder) is a category of speech coding that analyzes and synthesizes the human voice signal for audio data compression, multiplexing, voice encryption or voice transformation. The vocoder was invented in 1938 by Homer Dudley at Bell Labs as a means of synthesizing human speech. This work was developed into the channel vocoder which was used as a voice codec for telecommunications for speech coding to conserve bandwidth in transmission. By encrypting the control signals, voice transmission can be secured against interception. Its primary use in this fashion is for secure radio communication. The advantage of this method of encryption is that none of the original signal is sent, only envelopes of the bandpass filters. The receiving unit needs to be set up in the same filter configuration to re-synthesize a version of the original signal spectrum. The vocoder has also been used extensively as an electronic musical instrument. The decoder portion of the vocoder, called a voder, can be used independently for speech synthesis. Theory The human voice consists of sounds generated by the periodic opening and closing of the glottis by the vocal cords, which produces an acoustic waveform with many harmonics. This initial sound is then filtered by movements in the nose, mouth and throat (a complicated resonant piping system known as the vocal tract) to produce fluctuations in harmonic content (formants) in a controlled way, creating the wide variety of sounds used in speech. There is another set of sounds, known as the unvoiced and plosive sounds, which are created or modified by a variety of sound generating disruptions of airflow occurring in the vocal tract. The vocoder analyzes speech by measuring how its spectral energy distribution characteristics fluctuate across time. This analysis results in a set of temporally parallel envelope signals, each representing the individual frequency band amplitudes of the user's speech. Put another way, the voice signal is divided into a number of frequency bands (the larger this number, the more accurate the analysis) and the level of signal present at each frequency band, occurring simultaneously, measured by an envelope follower, represents the spectral energy distribution across time. This set of envelope amplitude signals is called the "modulator". To recreate speech, the vocoder reverses the analysis process, variably filtering an initial broadband noise (referred to alternately as the "source" or "carrier"), by passing it through a set of band-pass filters, whose individual envelope amplitude levels are controlled, in real time, by the set of analyzed envelope amplitude signals from the modulator. The digital encoding process involves a periodic analysis of each of the modulator's multiband set of filter envelope amplitudes. This analysis results in a set of digital pulse code modulation stream readings. Then the pulse code modulation stream outputs of each band are transmitted to a decoder. The decoder applies the pulse code modulations as control signals to corresponding amplifiers of the output filter channels. Information about the fundamental frequency of the initial voice signal (as distinct from its spectral characteristic) is discarded; it was not important to preserve this for the vocoder's original use as an encryption aid. It is this dehumanizing aspect of the vocoding process that has made it useful in creating special voice effects in popular music and audio entertainment. 
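The analysis and resynthesis steps described above can be condensed into a short filter-bank program. The sketch below is only an illustration under stated assumptions (ten logarithmically spaced bands between 100 Hz and 4 kHz, second-order Butterworth filters, a sawtooth carrier, and enveloped white noise standing in for recorded speech); it omits the separate unvoiced/sibilance channel discussed below and is not a reproduction of any historical design.

# Hedged sketch of a channel vocoder: impose the modulator's band envelopes on a carrier.
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

def bandpass(x, lo, hi, fs, order=2):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return lfilter(b, a, x)

def envelope(x, fs, cutoff=50.0):
    # Crude envelope follower: rectify, then low-pass well below the band frequencies.
    b, a = butter(2, cutoff, btype="low", fs=fs)
    return np.maximum(lfilter(b, a, np.abs(x)), 0.0)

def channel_vocoder(modulator, carrier, fs, n_bands=10, f_lo=100.0, f_hi=4000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)             # logarithmically spaced band edges
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(modulator, lo, hi, fs), fs)   # analysis of the voice band
        out += env * bandpass(carrier, lo, hi, fs)            # synthesis onto the matching carrier band
    return out / np.max(np.abs(out))

# Toy demonstration: enveloped noise stands in for a recorded voice (the modulator),
# and a 110 Hz sawtooth provides the pitched carrier.
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
modulator = np.random.randn(t.size) * np.hanning(t.size)
carrier = sawtooth(2 * np.pi * 110 * t)
robot_voice = channel_vocoder(modulator, carrier, fs)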
Instead of a point-by-point recreation of the waveform, the vocoder process sends only the parameters of the vocal model over the communication link. Since the parameters change slowly compared to the original speech waveform, the bandwidth required to transmit speech can be reduced. This allows more speech channels to utilize a given communication channel, such as a radio channel or a submarine cable. Analog vocoders typically analyze an incoming signal by splitting the signal into multiple tuned frequency bands or ranges. To reconstruct the signal, a carrier signal is sent through a series of these tuned band-pass filters. In the example of a typical robot voice the carrier is noise or a sawtooth waveform. There are usually between 8 and 20 bands. The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside the analysis bands for typical speech but are still important in speech. Examples are words that start with the letters s, f, ch or any other sibilant sound. Using this band produces recognizable speech, although somewhat mechanical sounding. Vocoders often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency. This is mixed with the carrier output to increase clarity. In the channel vocoder algorithm, of the two components of an analytic signal, only the amplitude component is retained; simply ignoring the phase component tends to result in an unclear voice (for methods of rectifying this, see phase vocoder). History The development of a vocoder was started in 1928 by Bell Labs engineer Homer Dudley, who was granted patents for it on March 21, 1939, and Nov 16, 1937. To demonstrate the speech synthesis ability of its decoder section, the voder (voice operating demonstrator) was introduced to the public at the AT&T building at the 1939–1940 New York World's Fair. The voder consisted of an electronic oscillator (a sound source of pitched tone) and a noise generator (for hiss), a bank of ten resonator filters with variable-gain amplifiers serving as a vocal tract, and manual controls including a set of pressure-sensitive keys for filter control and a foot pedal for pitch control of the tone. The filters, controlled by the keys, convert the tone and the hiss into vowels, consonants, and inflections. This was a complex machine to operate, but a skilled operator could produce recognizable speech. Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs engineers in 1943. SIGSALY was used for encrypted voice communications during World War II. The KO-6 voice coder was released in 1949 in limited quantities; it was a close approximation to SIGSALY. In 1953, the KY-9 THESEUS voice coder used solid-state logic to reduce the weight compared with SIGSALY's, and in 1961 the HY-2 voice coder, a 16-channel system, was the last implementation of a channel vocoder in a secure speech system. Later work in this field has since used digital speech coding. The most widely used speech coding technique is linear predictive coding (LPC). Another speech coding technique, adaptive differential pulse-code modulation (ADPCM), was developed by P. Cummiskey, Nikil S. 
Jayant and James L. Flanagan at Bell Labs in 1973. Applications Terminal equipment for systems based on digital mobile radio (DMR). Digital voice scrambling and encryption Cochlear implants: noise and tone vocoding is used to simulate the effects of cochlear implants. Musical and other artistic effects Modern implementations Even with the need to record several frequencies, and additional unvoiced sounds, the compression of vocoder systems is impressive. Standard speech-recording systems capture frequencies from about 500 to 3,400 Hz, where most of the frequencies used in speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling resolution is typically 8 or more bits per sample, for a data rate of roughly 64 kbit/s (8 kHz × 8 bits), but a good vocoder can provide a reasonably good simulation of voice with a small fraction of that data. Toll quality voice coders, such as ITU G.729, are used in many telephone networks. G.729 in particular combines a low final data rate with superb voice quality. G.723 achieves slightly worse quality at data rates of 5.3 kbit/s and above. Many voice vocoder systems use lower data rates still, but below a certain point voice quality begins to drop rapidly. Several vocoder systems are used in NSA encryption systems: LPC-10 (FIPS Pub 137), which uses linear predictive coding Code-excited linear prediction (CELP), Federal Standard 1016, used in STU-III Continuously variable slope delta modulation (CVSD), used in wide band encryptors such as the KY-57. Mixed-excitation linear prediction (MELP), MIL STD 3005, used in the Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone. Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, used in STE secure telephone Modern vocoders that are used in communication equipment and in voice storage devices today are based on the following algorithms: Algebraic code-excited linear prediction (ACELP 4.7–24 kbit/s) Mixed-excitation linear prediction (MELPe 2400, 1200 bit/s and lower) Multi-band excitation (AMBE) Sinusoidal-Pulsed Representation (SPR) Robust Advanced Low-complexity Waveform Interpolation (RALCWI 2050, 2400 bit/s and higher) Tri-Wave Excited Linear Prediction (TWELP 300–9600 bit/s) Noise Robust Vocoder (NRV 300 bit/s and higher) Vocoders are also currently used in psychophysics, linguistics, computational neuroscience and cochlear implant research. Linear prediction-based Since the late 1970s, most non-musical vocoders have been implemented using linear prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum) and again at the decoder to re-apply the spectral shape of the target speech signal. One advantage of this type of filtering is that the location of the linear predictor's spectral peaks is entirely determined by the target signal, and can be as precise as allowed by the time period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks, where the location of spectral peaks is constrained by the available fixed frequency bands. LP filtering also has disadvantages in that signals with a large number of constituent frequencies may exceed the number of frequencies that can be represented by the linear prediction filter. 
Waveform-interpolative

The waveform-interpolative (WI) vocoder was developed at AT&T Bell Laboratories around 1995 by W.B. Kleijn, and subsequently a low-complexity version was developed by AT&T for the DoD secure vocoder competition. Notable enhancements to the WI coder were made at the University of California, Santa Barbara. AT&T holds the core patents related to WI, and other institutes hold additional patents.

Artistic effects

Uses in music

For musical applications, a source of musical sounds is used as the carrier, instead of extracting the fundamental frequency. For instance, one could use the sound of a synthesizer as the input to the filter bank, a technique that became popular in the 1970s.

History

Werner Meyer-Eppler, a German scientist with a special interest in electronic voice synthesis, published a thesis in 1948 on electronic music and speech synthesis from the viewpoint of sound synthesis. He was later instrumental in the founding of the Studio for Electronic Music of WDR in Cologne in 1951.

One of the first attempts to use a vocoder in creating music was the Siemens Synthesizer at the Siemens Studio for Electronic Music, developed between 1956 and 1959. In 1968, Robert Moog developed one of the first solid-state musical vocoders for the electronic music studio of the University at Buffalo. Also in 1968, Bruce Haack built a prototype vocoder, named Farad after Michael Faraday. It was first featured on The Electronic Record For Children, released in 1969, and then on his rock album The Electric Lucifer, released in 1970.

In 1970, Wendy Carlos and Robert Moog built another musical vocoder, a ten-band device inspired by the vocoder designs of Homer Dudley. It was originally called a spectrum encoder-decoder and later referred to simply as a vocoder. The carrier signal came from a Moog modular synthesizer, and the modulator from a microphone input. The output of the ten-band vocoder was fairly intelligible but relied on specially articulated speech.

In 1972, Isao Tomita's first electronic music album, Electric Samurai: Switched on Rock, was an early attempt at applying speech synthesis techniques to electronic rock. The album featured electronic renditions of contemporary rock and pop songs, while utilizing synthesized voices in place of human voices. In 1974, he utilized synthesized voices in his popular classical music album Snowflakes are Dancing, which became a worldwide success and helped to popularize electronic music.

In 1973, the British band Emerson, Lake and Palmer used a vocoder on their album Brain Salad Surgery, for the song "Karn Evil 9: 3rd Impression". The 1975 song "The Raven", from the album Tales of Mystery and Imagination by The Alan Parsons Project, features Alan Parsons performing vocals through an EMI vocoder. According to the album's liner notes, "The Raven" was the first rock song to feature a digital vocoder.

Pink Floyd used a vocoder on three of their albums: first on their 1977 album Animals, for the songs "Sheep" and "Pigs (Three Different Ones)"; then in 1987 on A Momentary Lapse of Reason, on "A New Machine Part 1" and "A New Machine Part 2"; and finally on 1994's The Division Bell, on "Keep Talking".

The Electric Light Orchestra was among the first to use the vocoder in a commercial context, with their 1977 album Out of the Blue. The band uses it extensively on the album, including on the hits "Sweet Talkin' Woman" and "Mr. Blue Sky". On following albums the band made sporadic use of it, notably on their hits "The Diary of Horace Wimp" and "Confusion" from their 1979 album Discovery, the tracks "Prologue", "Yours Truly, 2095", and "Epilogue" on their 1981 album Time, and "Calling America" from their 1986 album Balance of Power.

In the late 1970s, the French duo Space Art used a vocoder during the recording of their second album, Trip in the Centre Head. Phil Collins used a vocoder to provide a vocal effect for his 1981 international hit single "In the Air Tonight".

Vocoders have appeared on pop recordings from time to time, most often simply as a special effect rather than a featured aspect of the work. However, many experimental electronic artists of the new-age music genre often utilize the vocoder in a more comprehensive manner in specific works, such as Jean-Michel Jarre (on Zoolook, 1984) and Mike Oldfield (on QE2, 1980, and Five Miles Out, 1982). The vocoder module and its use by Mike Oldfield can be clearly seen on his Live At Montreux 1981 DVD (track "Sheba").

There are also some artists who have made vocoders an essential part of their music, overall or during an extended phase. Examples include the German synthpop group Kraftwerk, the Japanese new wave group Polysics, Stevie Wonder ("Send One Your Love", "A Seed's a Star") and jazz/fusion keyboardist Herbie Hancock during his late 1970s period. In 1982, Neil Young used a Sennheiser Vocoder VSM201 on six of the nine tracks on Trans. The chorus and bridge of Michael Jackson's "P.Y.T. (Pretty Young Thing)" feature a vocoder ("Pretty young thing/You make me sing"), courtesy of session musician Michael Boddicker.

Coldplay have used a vocoder in some of their songs. For example, in "Major Minus" and "Hurts Like Heaven", both from the album Mylo Xyloto (2011), Chris Martin's vocals are mostly vocoder-processed. "Midnight", from Ghost Stories (2014), also features Martin singing through a vocoder. The hidden track "X Marks the Spot" from A Head Full of Dreams was also recorded through a vocoder.

Noisecore band Atari Teenage Riot have used vocoders in a variety of their songs and live performances, such as Live at the Brixton Academy (2002), alongside other digital audio technology both old and new. The Red Hot Chili Peppers song "By the Way" uses a vocoder effect on Anthony Kiedis' vocals.

Among the most consistent users of the vocoder in emulating the human voice are Daft Punk, who have used this instrument from their first album Homework (1997) to their latest work Random Access Memories (2013) and consider the convergence of technological and human voice "the identity of their musical project". For instance, the lyrics of "Around the World" (1997) are integrally vocoder-processed, "Get Lucky" (2013) features a mix of natural and processed human voices, and "Instant Crush" (2013) features Julian Casablancas singing into a vocoder. Ye (Kanye West) used a vocoder on the outro of his song "Runaway" (2010). Producer Zedd, American country singer Maren Morris and American musical duo Grey made a song titled "The Middle", which features a vocoder and reached the top ten of the charts in 2018.

Voice effects in other arts

Robot voices became a recurring element in popular music during the 20th century. Apart from vocoders, several other methods of producing variations on this effect include the Sonovox, Talk box, Auto-Tune, linear prediction vocoders, speech synthesis, ring modulation and comb filter.
Vocoders are used in television production, filmmaking and games, usually for robots or talking computers. The robot voices of the Cylons in Battlestar Galactica were created with an EMS Vocoder 2000. The 1980 version of the Doctor Who theme, as arranged and recorded by Peter Howell, has a section of the main melody generated by a Roland SVC-350 vocoder. A similar Roland VP-330 vocoder was used to create the voice of Soundwave, a character from the Transformers series.

See also

Audio time stretching and pitch scaling
List of vocoders
Silent speech interface

Notes

References

Multimedia references

External links

Description, photographs, and diagram for the vocoder at 120years.net
O'Reilly Article on Vocoders
Object of Interest: The Vocoder, The New Yorker magazine mini documentary

Audio effects
Electronic musical instruments
Music hardware
Lossy compression algorithms
Speech codecs
Robotics engineering
Vocoder
Technology,Engineering
3,963
60,586,206
https://en.wikipedia.org/wiki/Plan%20Voisin
The Plan Voisin was a planned redevelopment of Paris designed by French-Swiss architect Le Corbusier in 1925. The redevelopment was planned to replace a large area of central Paris on the Right Bank of the River Seine. Although it was never implemented, the project is one of Le Corbusier's best known; its principles inspired a number of other plans around the world.

Background

Ville Contemporaine

In 1922, Le Corbusier presented Ville Contemporaine at the Salon d'Automne; the plan was a utopian urban concept intended to house three million inhabitants in a series of skyscrapers. Following the exhibition, Le Corbusier continued work on the project, developing the plan from a non-site-specific concept into a concrete proposal. This proposal was sponsored by his friend, the avant-garde aircraft and automobile builder Gabriel Voisin, whose cutting-edge design aesthetic was admired by Le Corbusier.

Motivation

Le Corbusier's motivation to develop the Plan Voisin was founded in frustrations with the urban design of Paris. While upper-class citizens of many urban areas relocated to suburbs, the bourgeois residents of late 19th-century Paris largely remained in the city center. Pushed out by rising land prices, poorer Parisians left for shanty towns on the city's outskirts. Economic segregation was exacerbated by Georges Haussmann's renovation of the city, which separated affluent and poor neighborhoods with wide avenues. Within Paris' poorer neighborhoods, severe disease, worsened by poor sanitation, was rampant. Tuberculosis, in particular, was highly concentrated within the city's slums.

Characteristics

The Plan Voisin consisted of 18 identical skyscrapers, spread out evenly over an open plain of roads and parks. These skyscrapers would have adhered to the Le Corbusian model of the unité d'habitation, a comprehensive living and working space and an early inspiration for brutalism. The development could accommodate 78,000 residents over an area of 260 hectares. In stark contrast to the dense urban area that the plan intended to replace, only 12% of the area of the Plan Voisin was to be built up. Of the built-up area, 49% was partitioned for residential use, while the other 51% accounted for all other uses of the space. Roughly a third of the open area was reserved for vehicle use, while the rest was pedestrian-only. Le Corbusier developed his proposal for the Plan Voisin in this way in explicit contrast to dense urban areas such as downtown New York City, which he described as a "nightmare".

The proposal called for wider roads to accommodate automobile traffic and to lessen the burden that horse-drawn carriages placed on it. These roads would be paired with tree-lined pedestrian walkways, surrounded by the skyscrapers in the open air above the tree line. The walkways would lead gradually to the buildings, which contained ground-floor cafés, shops, and offices. The residential spaces on the floors above were described as "dormitories".

Rejection and legacy

Rejection

Ultimately, the Plan Voisin was rejected by the city of Paris, as it was seen as too radical. While it is unclear whether the general public supported the plan, Le Corbusier promoted his ideas through manifestos and periodicals, which were widely read by industrialists and the avant-garde of the time. Additionally, Le Corbusier showcased his plans at international expositions, spreading the influence of the plan's principles around the world.
Legacy

The Plan Voisin was the first of Le Corbusier's concrete urban proposals, and its principles were paramount in the spread of modernist urbanism around the world. In particular, the plan's openness, its relatively sparse built-up area, and its use of residential towers were practices that were replicated in many places. La Cité de la Muette was built in Drancy, a suburb of Paris, closely mimicking the design techniques of the Plan Voisin. The complex was used as a concentration camp from 1941 to 1944, from where over 67,000 Jews were sent directly to Auschwitz. Additionally, the La Défense business district of Paris drew inspiration from the Plan Voisin, with its concrete slab foundation a feature notably similar to Le Corbusier's plan.

These plans arose in the context of the post-war construction boom in Europe, lasting roughly from 1945 to 1980. During this period, urban development was rapidly spurred on by rural-to-urban migration and immigration from former colonies. The simplicity and high capacity of modernist residential towers made them suitable for this rapid development, and such towers are commonplace in many Parisian suburbs as a result. These principles were summarized in the Athens Charter of 1947, which acted as a treatise for functional, modernist urban planning.

Internationally, many plans were influenced by the Plan Voisin and the Athens Charter. The plan had significant influence on the purpose-built Brazilian capital of Brasília, as well as on the Lekkumerend housing in Leeuwarden, Netherlands, which drew inspiration from the principles of the Athens Charter. By the 1990s, the Lekkumerend quarter had become a byword for criminality and poverty, and most of the original Corbusier-inspired buildings have since been demolished in an effort to improve living conditions. The unpopular name Lekkumerend was changed to 'Vrijheidswijk'.

References

Unbuilt buildings and structures in France
Le Corbusier buildings in France
Architecture related to utopias
Plan Voisin
Engineering
1,135