A New Animal Drug Application is an American legal term, defined in 21 CFR Part 514, following the definition of the term New Animal Drug in Part 510. It is used by the FDA . A new animal drug is defined, in part, as any drug intended for use in animals other than man, including any drug intended for use in animal feed but not including the animal feed, the composition of which is such that the drug is not generally recognized as safe and effective for the use under the conditions prescribed, recommended, or suggested in the labeling of the drug. [ 1 ] It was mandated by the Federal Food, Drug, and Cosmetic Act , [ 1 ] as modified by the Food and Drug Administration Amendments Act of 2007 on 27 September 2007, and is the analogue of the New Drug Application for humans. [ citation needed ] There are three different types of new animal drug applications. [ 1 ]
https://en.wikipedia.org/wiki/New_Animal_Drug_Application
New Biological Nomenclature ( N.B.N. ) is a system for naming the species and other taxa of animals, plants, etc. in a way that differs from the traditional nomenclatures of the past, [ 1 ] as defined by its founder Wim De Smet , a Flemish zoologist . The project arose and developed between approximately 1970 and 2005, coinciding with the existence of a supporting organization, the Association for the Introduction of New Biological Nomenclature (AINBN). [ 2 ] The system rests on 57 [ 3 ] plainly formulated rules and uses terms from the language Esperanto , sometimes with the addition of neologisms with an Esperanto structure. However, N.B.N. does not simply involve translation into Esperanto of names of animals and plants. It is an entirely new [ 4 ] scientific system, intended to give rational names to all taxa of the biota . After 1994 a formula consisting of letters and numerals was added to each N.B.N. name, which improves the possibilities for efficient (computer) use of the system. Even though the project has stalled for several years, 3000 N.B.N. names approved by the AINBN are available, and it would be possible to continue the project. The sample contains many of the higher taxa of zoology . So far, only a small effort has been made for botany, because most of the participants were zoologists. For comparison, the famous Systema naturae of 1758 described only 4,238 species of animals. According to the initiator, a number of advantages are linked to N.B.N., [ 5 ] and N.B.N. keeps features of traditional nomenclature that are considered to be excellent. N.B.N. names consist of one, two or three words: [ 6 ] (a) for the indication of a species two words are used, the first referring to a higher taxon and the second to the species itself; (b) when a name consists of three words it relates to a subspecies ; names of one word indicate taxa of levels higher than the species. Nothing is changed in the traditionally accepted taxonomic structure (with the exception of the omission of the genus). N.B.N. assumes that the purpose of nomenclature is to give suitable and correct names to taxa, not to establish phylogenetic links. [ 7 ] "It was felt useful to classify the existing N.B.N.-names logically, independent of the phylogenetic classification of present biology, but in conformity with Linnaeus' original method, which generally began with the best known taxa and passed on to the lesser known ones." [ 8 ] All arrangements are the same for plants and for animals, and in the future for other kingdoms too. The first word of the name of a species always ends with "o" (in Esperanto a substantive ). The second word always ends with "a" (in Esperanto an adjective ). The first word is the same for all species that belong to a family. The name of the N.B.N. family is obtained by adding "j" (the Esperanto plural) to that substantive. For a subspecies a third adjective is added. Where possible, the adjectives that define species and subspecies should express an informative, correct and exclusive characteristic; if this is not possible, that should be clear from the given name.
There are five possibilities [ 9 ] for the adjective that defines the species: (a) a word ending with "tipa" (typical for), for only one species of the family; (b) a word that expresses a characteristic of the species; (c) a word ending with "noma" (with the name...), which refers to the scientific name; (d) an expression of the geographical range of the species, so far as the area is unique and possible to describe correctly; (e) a word without meaning, ending with "ea" and taken from a definite list. This last possibility is only applied when no other solution is found. N.B.N. names consisting of one word have fixed endings corresponding to the taxa. For kingdoms (regno), phyla (filumo), classes (klaso), orders (ordo) and suborders (subordo), and families (familio) the endings are respectively "-regnanoj", "-filumanoj", "-klasanoj", "-ordanoj" and simply "-oj". The insertion "an" means "member of" and derives from the Esperanto ano , member. In N.B.N. the name of the genus does not appear. All members of a family have the same "family name". This, properly speaking, means a return to the situation of the time of Linnaeus , where the genus played the role that is currently filled by the family designation; the family is a taxon that was inserted in the 19th century. The possible loss of information is counterbalanced by the addition of a numerical code that is linked to each species name and easily found with the help of today's usual technical means. The elimination of the genus name means an enormous saving in the number of nomenclatural terms, since more than 300,000 [ 7 ] genera exist today in the animal kingdom alone. Every taxon from the family upwards is linked to a type species that determines the name of that taxon, together with the ending mentioned above. Perhaps the most characteristic quality of N.B.N. is the central place of the order and the use of a key word that is found in every taxon belonging to a given order; for example, in the order of *Fokordanoj* ( Pinnipedia ) the key word is "foko", which is present in the names of all families and all species of the order. In contrast with traditional nomenclature, N.B.N. names do not include the names of authors or the year of publication, and the principle of priority does not exist in N.B.N. The N.B.N. names and rules are not unchangeable, but adaptations on personal initiative are not permitted; suggestions for changes have to be thoroughly justified and submitted to the organizing body. If adaptations prove necessary, they are to be published without delay. [ 10 ]
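To make the word-building rules above concrete, here is a minimal sketch in Python. It assumes only the endings stated in the text ("-o" for the substantive, "-a" for the defining adjectives, "-j" for the family plural, "-ordanoj" for orders) and that the final vowel of the key word is elided before "-ordanoj", as the example *Fokordanoj* suggests; the sample words are illustrative and are not actual AINBN-approved N.B.N. names.

```python
# Sketch of the N.B.N. word-building rules as described in the text.
# All example names are illustrative, not approved N.B.N. names.

def species_name(family_root: str, species_adj: str, subspecies_adj: str = None) -> str:
    """Two-word species name, or three-word subspecies name."""
    assert family_root.endswith("o"), "first word is an Esperanto substantive ending in -o"
    words = [family_root, species_adj] + ([subspecies_adj] if subspecies_adj else [])
    assert all(w.endswith("a") for w in words[1:]), "defining words are adjectives ending in -a"
    return " ".join(words)

def family_name(family_root: str) -> str:
    """Family name: the substantive in the Esperanto plural (add -j)."""
    return family_root + "j"

def order_name(key_word: str) -> str:
    """Order name: key word plus -ordanoj ('members of the ... order').
    The final -o of the key word is assumed to be elided, cf. foko -> Fokordanoj."""
    root = key_word[:-1] if key_word.endswith("o") else key_word
    return root.capitalize() + "ordanoj"

print(order_name("foko"))                     # Fokordanoj (the order, cf. Pinnipedia)
print(family_name("foko"))                    # fokoj (a family within that order)
print(species_name("foko", "tipa"))           # foko tipa (a species)
print(species_name("foko", "tipa", "norda"))  # foko tipa norda (a subspecies, illustrative)
```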
https://en.wikipedia.org/wiki/New_Biological_Nomenclature
The New Catalogue of Suspected Variable Stars ( NSV ) is a star catalogue containing 14,811 stars which, although suspected to be variable , were not given variable star designations prior to 1980. It was published in 1982. [ 1 ]
https://en.wikipedia.org/wiki/New_Catalogue_of_Suspected_Variable_Stars
New College is a planned new college of the University of St Andrews first announced in 2022. It will be located on the former site of Madras College , centred on the Grade A-listed main building. [ 1 ] [ 2 ] It is expected to cost £140 million, and will contain the School of International Relations and the new Business School. [ 3 ] WilkinsonEyre have been appointed to design it. [ 4 ] [ 5 ] Over 700 new student bedrooms are planned to be built as part of the development. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/New_College,_St_Andrews
The Food and Drug Administration 's (FDA) New Drug Application ( NDA ) is the vehicle in the United States through which drug sponsors formally propose that the FDA approve a new pharmaceutical for sale and marketing. [ 1 ] [ 2 ] Some 30% or fewer of initial drug candidates proceed through the entire multi-year process of drug development and conclude with an approved NDA. [ 1 ] The goal of the NDA is to provide enough information to permit FDA reviewers to establish the complete history of the candidate drug. [ 3 ] A range of supporting facts is needed for the application. [ 2 ] Exceptions to this process include voter-driven initiatives for medical marijuana [ 4 ] in certain states. To legally test the drug on human subjects in the United States, the maker must first obtain an Investigational New Drug (IND) designation from the FDA. [ 5 ] This application is based on nonclinical data, typically from a combination of in vivo and in vitro laboratory safety studies, that shows the drug is safe enough to test in humans. [ 5 ] Often the "new" drugs that are submitted for approval include new molecular entities [ 6 ] or old medications that have been chemically modified to elicit differential pharmacological effects or reduced side effects . Since the 1962 Kefauver–Harris Amendment , new drugs have been statutorily required to demonstrate both safety and effectiveness through substantial evidence for approval. The amendment defines substantial evidence as "evidence consisting of adequate and well-controlled investigations, including clinical investigations , by experts qualified by scientific training and experience to evaluate the effectiveness of the drug involved, on the basis of which it could fairly and responsibly be concluded by such experts that the drug will have the effect it purports or is represented to have under the conditions of use prescribed, recommended, or suggested in the labeling or proposed labeling thereof." [ 7 ] [ 8 ] This standard lies at the heart of the regulatory program for drugs. Data for the submission must include those from one or more rigorous clinical trials. [ 5 ] Because of the plural "adequate and well-controlled investigations" in the statute, the FDA has interpreted the substantial evidence requirement as requiring at least two adequate and well-controlled clinical trials, each convincing on its own. However, in 1997, Congress passed an amendment expressly granting the FDA authority to consider other types of confirmatory evidence along with one adequate and well-controlled clinical investigation for approval. [ 9 ] The trials are typically conducted in three phases. [ 5 ] The legal requirements for safety and effectiveness have been interpreted as requiring scientific evidence that the benefits of a drug outweigh the risks and that adequate instructions exist for use, since many drugs have adverse side effects . The results of the testing program are codified in an FDA-approved public document that is called the product label, package insert or Full Prescribing Information. [ 10 ] The prescribing information is widely available on the web from the FDA [ 11 ] and drug manufacturers, and is frequently inserted into drug packages. The main purpose of a drug label is to provide healthcare providers and consumers with adequate information and directions for the safe use of the drug.
The documentation required in an NDA is supposed to tell "the drug's whole story, including what happened during the clinical tests, what the ingredients of the drug are, the results of the animal studies, how the drug behaves in the body, and how it is manufactured, processed and packaged." [ 2 ] Once approval of an NDA is obtained, the new drug can be legally marketed from that day in the United States. Once the application is submitted, the FDA has 60 days to conduct a preliminary review, which assesses whether the NDA is "sufficiently complete to permit a substantive review." If the FDA finds the NDA insufficiently complete, the FDA rejects the application by sending the applicant a Refuse to File letter, which explains where the application failed to meet requirements. [ 12 ] Where the application cannot be granted for substantive reasons, the FDA issues a Complete Response Letter . Assuming the FDA finds the NDA acceptable, a 74-day letter is issued. [ 13 ] A standard review implies an FDA decision within about 10 months, while a priority review should be completed within 6 months. [ 14 ] Biologics , such as vaccines and many recombinant proteins used in medical treatments, are generally approved by the FDA via a Biologics License Application (BLA), rather than an NDA. The manufacture of biologics is considered to differ fundamentally from that of less complex chemicals, requiring a somewhat different approval process. Generic versions of drugs that have already been approved via an NDA submitted by another maker are approved via an Abbreviated New Drug Application (ANDA), which does not require all of the clinical trials normally required for a new drug in an NDA. [ 15 ] Most biological drugs, including a majority of recombinant proteins, are considered ineligible for an ANDA under current US law. [ 16 ] However, a handful of biologic medicines, including biosynthetic insulin , growth hormone , glucagon , calcitonin , and hyaluronidase , are grandfathered under the governance of the Federal Food, Drug, and Cosmetic Act, because these products had already been approved when legislation to regulate biotechnology medicines was later passed as part of the Public Health Service Act. Medications intended for use in animals are submitted to a different center within the FDA, the Center for Veterinary Medicine (CVM), in a New Animal Drug Application (NADA). These are also specifically evaluated for their use in food animals and their possible effect on the food from animals treated with the drug.
https://en.wikipedia.org/wiki/New_Drug_Application
The New Engineering Contract ( NEC ), or NEC Engineering and Construction Contract , is a formalised system created by the UK Institution of Civil Engineers that guides the drafting of documents on civil engineering , construction and maintenance projects for the purpose of obtaining tenders , awarding and administering contracts . [ 1 ] [ 2 ] NEC has become the default suite of contracts for public-sector works, services and supplies in the United Kingdom and Hong Kong. NEC contracts have also been successfully used in Australia, Ireland, the Netherlands, New Zealand, Peru, the Philippines, South Africa, the UAE and many other countries, and they are increasingly being used in the private sector. There have been four editions: the first in 1993, the second in 1995, the third in 2005 and the most recent in 2017. [ 3 ] NEC3 was launched in 2005 and was amended in April 2013. The NEC Users' Group, with over 400 members worldwide, brings together organisations and individual users of the NEC contract suite to exchange knowledge and best practice. [ 4 ] Originally, contracts in the civil engineering and construction industries were bespoke and drafted by Chancery pleaders using their knowledge of leases rather than building processes. In 1879, the Royal Institute of British Architects created the RIBA forms for construction projects, which led to the Joint Contracts Tribunal (JCT) forms. For civil engineering, the need for a formalised approach to contracts led the Institution of Civil Engineers (ICE) to produce a formalised set of conditions of contract. In 1986, the ICE commissioned the development of a new form of contract, as it was felt that there was a need for a form with clearer language, clearer allocation of responsibilities and reduced opportunities for contractual “gamesmanship”. In 1991, this resulted in a consultative version of the New Engineering Contract form of contract. The first edition was published in 1993. [ 5 ] Wider use of the NEC was recommended by the Latham Report in 1994. [ citation needed ] NEC's history started in 1986 when Martin Barnes was commissioned to start drafting a contract to stimulate good project management. The first edition of NEC was launched in 1993, and NEC2 arrived two years later, in 1995. NEC2 was used to build the High Speed 1 railway between London and the Channel Tunnel . [ citation needed ] NEC is a division of Thomas Telford Ltd, the commercial arm of the ICE. [ 6 ] The NEC3 suite was launched in 2005 and was fully revised in 2013, NEC's 20th anniversary. This suite was used in projects such as Crossrail, the London 2012 Olympic and Paralympic Games, [ 7 ] Halley VI in Antarctica, and the Tin Shui Wai Hospital in Hong Kong. [ citation needed ] NEC3 was endorsed by the Construction Clients' Board (formerly the Public Sector Construction Clients' Forum), Crown Commercial Services , the Facilities Management Board of the UK Cabinet Office, the South African Construction Industry Development Board, the International Organization for Standardization (ISO), the Association for Project Management (APM) and the British Institute of Facilities Management (BIFM). [ 8 ] The Hong Kong government decided to use NEC3 contracts generally for all government projects tendered in 2015/16. After a series of successful NEC3 projects in the region, the Hong Kong government announced in November 2016 that the NEC contract suite would be used for all future public works projects as far as practicable. [ 9 ] Since 2017, Hong Kong has progressively moved to adopt NEC4.
[ 10 ] NEC4 was announced in March 2017 and has been available since June 2017. This new edition reflects procurement and project management developments and emerging best practice, with improvements in flexibility, clarity and ease of administration. It also introduced two new contracts: the NEC4 Design, Build and Operate Contract (DBO) and the NEC4 Alliance Contract (ALC). [ 11 ] An NEC4 contract suite covering facilities management was released in 2021. [ 12 ] Several changes to terminology were introduced in NEC4. One former NEC3 clause which dealt with the "spirit" of the contract was divided into two clauses, to show that both aspects should be complied with. The NEC is a family of standard contracts, each of which stimulates good management of the relationship between the two parties to the contract and, hence, of the work included in the contract; can be used in a wide variety of commercial situations, for a wide variety of types of work and in any location; and is a clear and simple document using language and a structure which are straightforward and easily understood. The contracts legally define the responsibilities and duties of Employers (the party which commissions the work) and Contractors (the party which carries out the work) in the works information. The contract consists of two key parts, divided between contract data provided by the Employer and that provided by the Contractor. NEC3 complies fully with the Achieving Excellence in Construction (AEC) principles. The Efficiency and Reform Group of the UK Cabinet Office recommends the use of NEC contracts by public sector construction procurers on construction projects. [ citation needed ] NEC documents use some of their own terminology in a specific manner; for example, "compensation event" is a term used to encompass variations, losses and expenses, and extensions of time . The term implies that in some situations the contractor will be "compensated" with additional payment, e.g. for additional expenses incurred as a result of client actions, but financial compensation may not always arise and in some cases variations will result in a cost saving. [ 15 ] The NEC contracts now form a suite of contracts, with NEC being the brand name for the "family" of contracts. [ 16 ] When it was first launched in 1993, it was simply the "New Engineering Contract". This specific contract has been renamed the "Engineering and Construction Contract", which is the main contract used for any construction-based project. It now sits alongside a number of other contracts, making the NEC suite suitable for use in many stages of the lifecycle of a project and for any party within a project. The contracts available within the suite are: Engineering and Construction Contract (ECC) : Suitable for any construction-based contract between an Employer and a Contractor. It is intended to be suitable for any sector of the industry, including civil, building, nuclear, oil and gas, etc. Within the ECC contract there are six main options, from which the Employer is to choose the one that is most suitable and offers the best value for money on that project. These options offer a framework for tender and contract clauses that differ primarily in regard to the mechanisms by which the contractor is paid, how risk is allocated, and how the contractor is motivated to control costs. The core clauses of the chosen main option are used in conjunction with the secondary options and the additional conditions of contract.
The clauses of these options have been adapted for simpler, less risky work (short contracts), for use as subcontracts, and for professional services such as design, as set out below. The Engineering and Construction Subcontract (ECS) : Very similar in detail and complexity of contractual requirements to the ECC contract above, but allows the contractor to sub-let the project to a subcontractor, imposing most of the clauses that she/he has within her/his headline contract. There is very little difference between the ECC and the ECS, other than that the names of the parties are changed (contractor and subcontractor) and some of the timescales for contractual responses are altered to take into account the timescales required in the ECC contract. The Engineering and Construction Short Contract (ECSC) : This is an abbreviated version of the ECC contract and is most suitable when the contract is considered "low risk" (not necessarily low value) on a project with little change expected. This contract is still between the employer and contractor but does not use all of the processes of the ECC, making it simpler and easier to manage and administer. The Engineering and Construction Short Subcontract (ECSS) : Allows the contractor to sub-let a simpler, lower-risk contract down the line to a subcontractor. It is back-to-back with the ECSC but is frequently used as a subcontract when the main contract is under the ECS. The Professional Services Contract (PSC) : This contract is for anyone providing a service, rather than undertaking any physical construction works. Designers are the most obvious party to fit into this category. Whilst they are producing a design for an employer or contractor, they would sign up to and follow the clauses within the PSC. Most of the clauses within this contract are the same as or similar to those in the main ECC contract, so that all contractors, designers and subcontractors have broadly the same obligations and processes to follow as each other. The PSC can be used in a wide variety of situations with relatively little change required. [ 17 ] As with the ECC contract, there are several main options; Options B, D and F do not exist. [ 18 ] The Professional Services Short Contract (PSSC) : This was added to the family in April 2013 and was co-developed with the Association for Project Management . It is for simpler, less complex assignments than the PSC, such as the appointment of a small team for managing an ECC contract on the Employer's behalf, e.g. the Project Manager and Supervisor. It is frequently used as a subcontract to the PSC for design work. Framework Contract (FC) : Parties enter into a "framework" under which work packages will then be let during the life of that framework. Any individual projects will then be awarded using one of the other contracts within the suite, meaning that the parties follow the headline clauses within the framework contract (which is a fairly slim contract) and then the individual clauses within the chosen contract for that package. Different work packages can be let using different contracts during the life of the framework. Term Service Contract (TSC) : For parties on a project that is operational or maintenance based, e.g. maintaining highway signage, where the contract is to ensure that a certain standard is maintained. This contract is not generally used for constructing new works, but can include some amount of betterment. There is also a "Term Service Short Contract" where the project is a relatively low-risk project and/or the work is primarily reactive.
It is an abbreviated version of the main TSC. Supply Contract/Short Supply Contract (SC/SSC) : These contracts were launched in 2010. They are for a supplier of goods to a project, and put extra contractual requirements on the supplier during the procurement/manufacture period. The Supply Contract is for large bespoke items, i.e. those designed and manufactured specifically for that contract, with the Short Supply Contract potentially being for more run-of-the-mill or commoditised items on a project. Neither of these contracts covers working on a site, as they are not written as 'supply and install' contracts. Dispute Resolution Services Contract (DRSC), previously the Adjudicator's Contract : If there is a dispute between the parties on a project, the Adjudicator will follow the clauses within this contract in order to come to a decision. Design Build and Operate (DBO) : The NEC4 Design, Build and Operate Contract (DBO) allows the procurement of a more integrated whole-life delivery solution. It combines responsibility for design, construction, operation and/or maintenance, procured from a single supplier. It can include a range of different services to be provided before, during and after engineering and construction works are completed. Alliance Contract (ALC) : The NEC4 Alliance Contract (ALC), published initially in a consultative format, was created to support clients who wish to take a step forward by fully integrating the delivery team for large complex projects. The ALC should be used to engage in a single collaborative contract with a number of participants in order to deliver a project or programme of work. The basis of the contract is that all parties work together in achieving client objectives, and share in the risks and benefits of doing so. Facilities Management contract suite : The NEC4 FMC suite includes the Facilities Management Contract (FMC), subcontract (FMS), short contract (FMSC) and short subcontract (FMSS). [ 12 ] Guidance Notes and Flowcharts : Each of the contracts listed above comes with its own set of guidance notes and flowcharts, which should aid understanding of the intent of the drafted clauses. The guidance notes expand on each clause to give extra substance and to explain the original drafters' intent as to how a clause should be understood and interpreted. The flowcharts then map out each of the main processes within each contract and demonstrate how it should operate and what to do next if a party has or has not carried out the next contractual action. The following demonstrates the differing approaches to drafting in the NEC and ICE forms of contract, using the illustration of circumstances when the contractor is entitled to additional time and cost for physical conditions. NEC: "The Contractor encounters physical conditions which are within the site, are not weather conditions and which an experienced contractor would have judged at the contract date to have such a small chance of occurring that it would have been unreasonable for her/him to have allowed for them." ICE: "If during the execution of the Works the Contractor shall encounter physical conditions (other than weather conditions or conditions due to weather conditions) or artificial obstructions which conditions or obstructions could not in her/his opinion reasonably have been foreseen by an experienced contractor the Contractor shall as early as practicable give written notice thereof to the Engineer."
Employers often use additional conditions of contract ("Z clauses") to amend or delete contract provisions relating to certain obligations, and the Efficiency and Reform Group of the Cabinet Office in the UK (formerly the OGC) has published generic public sector Z clauses for use with NEC contracts. [ 19 ] A standard Z clause relating to fair payment for sub-contractors (often labelled "Z5") was recommended for public sector use in 2011, [ 20 ] and additional public sector Z clauses were later published to reflect the contract termination provisions and other requirements of the Public Contracts Regulations 2015. [ 21 ] Excessive use of Z clauses has been criticised as "onerous" and "poorly drafted"; NEC guidance states that "additional conditions should be used only when absolutely necessary to accommodate special needs". [ 22 ] Guidance notes and flow charts are published by the ICE, and are supplemented by the frequently asked questions section of the NEC website. [ 23 ] Prospective users of the NEC3 contract are encouraged to study the FAQs in order to avoid unintended contract provisions. The often unintended Option C scenario, in which a Contractor is paid monies in excess of the Target Cost plus the maximum share provisions, is specifically not addressed in the guidance notes or frequently asked questions. Other common misinterpretations concern minutes of meetings as communications , deleted work , and paying for correcting defects .
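For readers unfamiliar with the Option C "share" mechanism referred to above, the following is a minimal, illustrative sketch of how a target-cost pain/gain share is commonly calculated. The share percentages and amounts are hypothetical values of the kind set out in the Contract Data; this is not the NEC drafting itself, only the general idea.

```python
# Illustrative pain/gain share for an NEC Option C style target-cost contract.
# The share percentages below are hypothetical Contract Data entries, not values
# prescribed by NEC; the sketch only shows the sharing idea.

def option_c_settlement(target_cost: float, defined_cost_plus_fee: float,
                        gain_share: float = 0.5, pain_share: float = 0.5):
    """Return (total paid to the contractor, contractor's share of the saving/overrun)."""
    difference = target_cost - defined_cost_plus_fee   # positive = saving, negative = overrun
    share = (gain_share if difference >= 0 else pain_share) * difference
    return defined_cost_plus_fee + share, share

# Saving: target 10.0m, actual cost 9.0m -> contractor keeps half of the 1.0m saving.
print(option_c_settlement(10_000_000, 9_000_000))   # (9500000.0, 500000.0)

# Overrun: target 10.0m, actual cost 11.0m -> contractor bears half of the overrun.
print(option_c_settlement(10_000_000, 11_000_000))  # (10500000.0, -500000.0)
```

In the second example the contractor is still paid more than the Target Cost even after bearing its share of the overrun, which is the kind of scenario the guidance notes are said not to address.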
https://en.wikipedia.org/wiki/New_Engineering_Contract
New England Biolabs ( NEB ) is an American life sciences company which produces and supplies recombinant and native enzyme reagents for life science research. [ 2 ] It also provides products and services supporting genome editing , synthetic biology and next-generation sequencing . [ 3 ] NEB also provides free access to research tools such as REBASE , InBASE, and Polbase . The company was founded in 1974 by Donald "Don" Comb, a Harvard Medical School professor, as a cooperative laboratory of experienced scientists and initially produced restriction enzymes on a commercial scale. [ 4 ] Comb held the CEO title until 2005 when, at 78 years old, he moved from management back into research at the firm. [ 5 ] NEB received approximately $1.7 million in Small Business Innovation Research (SBIR) grants between 2009 and 2013 for its research. [ 2 ] NEB produces 230 recombinant and 30 native restriction enzymes for genomic research, as well as nicking enzymes and DNA methylases. It pursues research in areas related to proteomics , DNA sequencing , and drug discovery . NEB scientists also conduct basic research in molecular biology and parasitology . [ 2 ] The company has subsidiaries in Singapore , Canada , China , France , Germany , Japan , the U.K., and Australia, [ 2 ] [ 6 ] and distributors in South America , Australia , and other countries in Europe and Asia . [ 7 ] Its headquarters are in Ipswich, Massachusetts. Development of the current headquarters began in 2000, and was completed in 2005. [ 8 ] Donald Comb served as the company's Chairman and CEO from the company's founding in 1974 until 2005, when he was replaced as chief executive by James Ellard , though Comb continued to serve as Chairman of the Board of Directors. Comb died in October 2020 at the age of 93. [ 9 ] NEB employs over 450 people at its headquarters. [ 10 ] As company policy, all scientists and some executives must work at least one day per month on the customer support telephone line, answering technical support questions about the company's products. [ 2 ] In 2022 Jim Ellard stepped down as CEO but remained chairman of the board of directors; he was succeeded by Salvatore (Sal) Russello, previously NEB's director of OEM and customized solutions. [ 11 ] Sir Richard John Roberts is the company's Chief Scientific Officer. [ 2 ] [ 12 ] He shared the 1993 Nobel Prize in Physiology or Medicine with Phillip Allen Sharp for the discovery of introns in eukaryotic DNA and the mechanism of gene splicing. In 2015, NEB committed to establishing a GMP manufacturing facility near its headquarters in Ipswich, Massachusetts, [ 13 ] and the 40,000-sq-ft facility was completed in 2018. [ 14 ] The multi-product Rowley Cleanroom Manufacturing Facility makes GMP-grade products and has a 10,000-sq-ft mechanical mezzanine. [ 15 ] In January 2017, NEB released Luna universal quantitative real-time polymerase chain reaction ( qPCR ) and reverse-transcription quantitative polymerase chain reaction ( RT-qPCR ) kits. [ 16 ] The Luna kits are used for DNA or RNA quantitation. [ 16 ] In December 2017, the company released the NEBNext Ultra II FS DNA library prep kit for next-generation sequencing (NGS). [ 17 ] In October 2019, NEB released new RNA depletion products, the NEBNext Globin & rRNA Depletion Kit (Human/Mouse/Rat) and the NEBNext rRNA Depletion Kit (Bacteria). [ 18 ] The kits offer specific depletion of the RNA species that interfere with the analysis of coding and non-coding RNAs.
[ 18 ] [ 19 ] That same month, the company announced its NEBNext Direct Genotyping Solution. [ 20 ] The product delivers a one-day, automatable genotyping workflow for a variety of applications in agricultural biotechnology . [ 20 ] In January 2020, NEB signed an agreement with ERS Genomics Limited that gave NEB rights to sell CRISPR/Cas9 tools and reagents, used for gene editing. [ 21 ] The NEBuilder HiFi DNA Assembly Cloning Kit and Master Mix enable one-step cloning and multiple DNA fragment assembly. The proprietary DNA polymerase in the NEBuilder HiFi enzyme mix can assemble DNA fragments ranging from 100 bp to 19 kb. NEB also offers the Gibson Assembly Master Mix. [ 22 ] NEB provides purification kits for both DNA and RNA. [ 23 ] [ 24 ] In May 2019, NEB released the Monarch Genomic DNA Purification Kit, which is designed to minimize RNA contamination and allow high-yield purification of large DNA fragments. [ 23 ] NEB's nucleic acid purification products have been used in various studies. New England Biolabs developed a colorimetric loop-mediated isothermal amplification (LAMP) assay for research use. [ 29 ] [ 30 ] This assay can be used to test for the presence of a virus through nucleic acid detection, returning results in only 30 minutes. [ 29 ] In 2020, the LAMP method was one of several molecular tests used to detect RNA from SARS-CoV-2 , a strain of coronavirus that causes COVID-19 . [ 31 ] RNA isolation kits were also used to develop assays to detect SARS-CoV-2. NEB's Monarch Total RNA Miniprep Kit was not designed specifically for viral RNA extraction, but it was successfully used by different companies to extract viral RNA from biological samples. [ 32 ] NEB also released a supplementary protocol for processing saliva, buccal swabs, and nasopharyngeal samples. [ 32 ] Three next-generation sequencing kits to support SARS-CoV-2 monitoring were launched in February 2021. These kits, based on ARTIC Network protocols, provide insights into virus transmission and evolution. [ 33 ] In April 2021, the Color SARS-CoV-2 RT-LAMP Diagnostic Assay, utilizing New England Biolabs reagents, was approved for emergency use at Color Health Inc. in Burlingame, California. [ 34 ] The company runs free scientific databases. REBASE , the restriction enzyme database, contains the details of commercial and research endonucleases. [ 35 ] In 2011 the company founded Polbase, an online database which provides information specifically about polymerases. [ 36 ] [ 37 ] Another free NEB database is InBase, an intein database, which includes the Intein Registry and information about each intein. [ 35 ] In 2001, NEB co-founded the marine DNA library Ocean Genome Legacy (OGL), which, according to the Boston Globe , “catalogues samples of organisms from all over the world, to be made available to scientists for research”. Though originally located on the NEB campus, OGL relocated to the Nahant campus of Northeastern University in 2014. [ 36 ] [ 37 ] [ 38 ] To enable point-of-use sales of its reagents, NEB created a digital interface for enzyme-housing freezers to be used at customer storage sites, through a partnership with Ionia Corp. and Salesforce.com. The data is used by the company for both sales logistics and as part of future enzyme research development. [ 39 ] [ 40 ] It has also partnered with Harvard University on recycling and reclamation initiatives when its products and packaging come to the end of their use or lifecycle.
[ 41 ] As of 2015, NEB also had a distribution agreement with VWR . [ 3 ] In June 2019, NEB, Waters, and Genos announced they would work together on The Human Glycome Project, a global initiative to map the structure and function of human glycans . [ 42 ] NEB will supply a version of its Rapid PNGase F technology to aid in increasing sample preparation and improving process throughput. [ 42 ] That same month, NEB entered a partnership with Bioz, Inc., an artificial intelligence technology company, to provide its customers with access to examples of real-world applications of its products. [ 43 ]
https://en.wikipedia.org/wiki/New_England_Biolabs
The New England Biotech Association (NEBA) is a coalition of biotechnology companies, academic institutions, pharmaceutical companies, and trade organizations from all six New England states. NEBA serves as a regional policy and public affairs voice for the biotechnology and biopharmaceutical industry. [ 1 ] NEBA is a non-profit , member-driven organization with over 600 members. [ 2 ] The Chairman of NEBA is Paul Pescatello, Director of Connecticut United for Research Excellence (CURE). [ 3 ] In 2010, NEBA advocated against measures that would harm the biotechnology industry in Maine and other New England states. The organization also launched a website, www.MassRxHelp.org, to help consumers save money on prescription medications. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/New_England_Biotech_Association
The New England Enzyme Center (NEEC) was created at the Tufts University School of Medicine in Boston, Massachusetts in 1964 as a federally supported biochemical resource center. [ 1 ] According to Doogab Yi, by the late 1970s NEEC had been transformed into "several commercial biotech companies." [ 1 ] Roscoe O. Brady and his colleagues at the National Institutes of Health (NIH) were almost ready for a clinical trial of an enzyme replacement therapy for Gaucher's disease that they had been working on for over a decade, [ 2 ] : 44 but they could not purify the enzyme in large enough quantities. [ 2 ] : 44 "They contracted with Henry Blair, who had worked at NIH with Brady but left to form the New England Enzyme Center at Tufts University Medical School in Boston. Supported solely by contracts from Brady's lab, Blair set up a lab for large-scale purification and began collecting fresh placentas. In 1981, with NIH getting ready to move into larger clinical trials and biotechnology fever exploding around him, Blair privatized his venture. He launched Genzyme, with the NIH as its major source of revenue." Blair had started his career in the biotechnology industry working as a technician at Tufts medical school. [ 3 ] [ 4 ] In 1978 Henry E. Blair of the NEEC and a team of researchers from the National Institutes of Health, including Peter G. Pentchev, Roscoe O. Brady , Daniel E. Britton and Susan H. Sorrell, co-authored a paper in PNAS on isolating and comparing enzymes in search of a treatment for Gaucher disease . [ 5 ] In 1981 venture capitalist Sheridan Snyder , Henry Blair and George M. Whitesides created the start-up Genzyme, which continued to produce the enzymes for the NIH. [ 4 ] [ 6 ] Genzyme's first office was an old clothing warehouse adjacent to Tufts Medical School. [ 3 ]
https://en.wikipedia.org/wiki/New_England_Enzyme_Center
New Enterprise Associates ( NEA ) is an American venture capital firm. NEA focuses on investment stages ranging from seed stage through growth stage across an array of industry sectors. With over $25 billion in committed capital, NEA is one of the world's largest venture capital firms. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] NEA was founded in 1977 by C. Richard (Dick) Kramlich , Chuck Newhall and Frank Bonsal. [ 7 ] Kramlich had worked with noted venture capitalist Arthur Rock beginning in 1969, and Bonsal had been an investment banker at Alex. Brown & Sons , where he focused on initial public offerings (IPOs) for startup companies. [ 7 ] Newhall had previously managed an investment fund for T. Rowe Price in the 1970s. The firm was founded with offices on both the East Coast and the West Coast. Among the firm's first investments was 3Com , which NEA backed along with Mayfield Fund and Jack Melchor in 1981. [ 8 ] The first NEA investment fund had only $16 million of capital. The firm's second fund raised $45 million and the third fund collected $125 million of commitments from investors in 1984. The firm continued to grow steadily throughout the 1980s and early 1990s, raising $900 million from 1987 through 1996 across NEA's next four funds. [ 7 ] Beginning with NEA-8 in 1998, the firm greatly increased the size of its investment funds. NEA's tenth fund had $2.3 billion of investor commitments in 2000. After raising a more modest $1.1 billion in 2004 for the firm's eleventh fund, NEA raised $2.3 billion and $2.5 billion for its next two funds, respectively. [ 9 ] [ 10 ] In 2010, NEA launched its thirteenth investment fund with $2.5 billion of investor capital, the largest since the 2008 financial crisis . [ 11 ] In 2012, NEA closed its fourteenth investment fund with $2.6 billion of investor capital. [ 3 ] [ 6 ] In April 2015, NEA closed its fifteenth investment fund with $3.1 billion in investor capital, the largest venture capital fund ever raised at the time. [ 12 ] In June 2017, NEA closed its sixteenth investment fund with $3.3 billion in investor capital, again the largest venture capital fund ever raised. [ 13 ] The firm operates primarily from New York and California, and has offices in Baltimore , Bangalore , Beijing , Boston , Menlo Park , California, Mumbai , New York City , San Francisco and Shanghai . [ 14 ] [ 1 ] Since its founding, NEA has invested in nearly 1,000 companies and realized over 650 liquidity events (with over 250 portfolio company IPOs and over 300 portfolio company acquisitions). [ 3 ] In 2018, former CEO of General Electric , Jeff Immelt , joined the firm as a venture partner. [ 15 ] NEA has 370 portfolio companies. Some of the firm's investments include: Aerohive Networks , Alimera Sciences , Amicus Therapeutics , Antenna Software , Appian , AppSheet , Arris International , Automation Anywhere , Bloom Energy , Box, Inc. , Bright Health , Built Robotics , ByteDance , Champions Oncology , Clarifai , Cleo , Clio , Cloudflare , Clovis Oncology , Cohere , Conviva , CrowdMed , Coursera , Dandelion Energy , Databricks , Drop , Eargo , Edmodo , Enigma , FiscalNote , Formlabs , FTX , Gen.G , Genies, Inc.
, Goji Electronics , GoodLeap , Goop , HackerOne , Houzz , IFTTT , Illumitex , Illusive Networks , Instabase , Konux , Lexicon Pharmaceuticals , Luminary , Luxtera , MasterClass , Moda Operandi , MongoDB , Pager , Patreon , Philo , Plaid , Raise.com , Regulus Therapeutics , Robinhood Markets , Rock Health , Smartcar , Splashtop , Tamara Mellon , The Yes , ThirdLove , Tintri , Uniphore , Upstart , Upwork , Virtru , Wheels Up , Zoomdata . [ 16 ] [ better source needed ]
https://en.wikipedia.org/wiki/New_Enterprise_Associates
The New Internet Computer (NIC) was a Linux -based internet appliance released July 6, 2000 by Larry Ellison and Gina Smith 's New Internet Computer Company. The system (without a monitor) sold for US$199. [ 1 ] [ 2 ] [ 3 ] The NIC boots from a CD-ROM with a custom Linux distribution developed by Wim Coekaerts. It has no hard drive and no way to install additional software. [ 1 ] The system's only non-volatile storage is 4 MB of flash memory . Ellison planned to sell 5 million units the first year, but fewer than 50,000 units were sold. The company shut its doors in June 2003. PC World ranked the NIC as the ninth worst PC of all time. [ 1 ]
https://en.wikipedia.org/wiki/New_Internet_Computer
The New Worlds Mission is a proposed project comprising a large occulter flying in formation with a space telescope, designed to block the light of nearby stars in order to observe their orbiting exoplanets . The observations could be taken with an existing space telescope or a dedicated visible-light optical telescope optimally designed for the task of finding exoplanets. A preliminary research project was funded from 2005 [ 1 ] through 2008 by the NASA Institute for Advanced Concepts (NIAC) and headed by Webster Cash of the University of Colorado at Boulder in conjunction with Ball Aerospace & Technologies Corp. , Northrop Grumman , Southwest Research Institute and others. Since 2010 the project has been looking for additional financing from NASA and other sources in the amount of roughly US$3 billion, including its own four-meter telescope. [ 2 ] [ 3 ] If financed and launched, it would operate for five years. Currently, the direct detection of extrasolar planets (or exoplanets) is extremely difficult. The difficulty of observing such a dim planet so close to a bright star is the obstacle that has prevented astronomers from directly photographing exoplanets. To date, only a handful of exoplanets have been photographed. [ 4 ] The first exoplanet to be photographed, 2M1207b , is in orbit around a star called 2M1207 . Astronomers were only able to photograph this planet because it is a very unusual planet that is very far from its host star, approximately 55 astronomical units (about twice the distance of Neptune ). Furthermore, the planet is orbiting a very dim star, known as a brown dwarf . To overcome the difficulty of distinguishing more Earth-like planets in the vicinity of a bright star, the New Worlds Mission would block the star's light with an occulter . The occulter would block all of the starlight from reaching the observer, while allowing the planet's light to pass undisturbed. The starshade would be tens of meters across and probably made out of Kapton , a lightweight material similar to Mylar . [ 5 ] Traditional methods of exoplanet detection rely on indirect means of inferring the existence of orbiting bodies. These indirect methods provide convincing evidence for the existence of extrasolar planets, but none of them provide actual images of the planets. The goal of the New Worlds Mission is to block the light coming from nearby stars with an occulter, allowing the direct observation of orbiting planets. The occulter would be a large sheet disc flown thousands of kilometers from the telescope along the line of sight. The disc would likely be several tens of meters in diameter, would fit inside existing expendable launch vehicles, and would be deployed after launch. One difficulty with this concept is that light incoming from the target star would diffract around the disc and constructively interfere along the central axis. Thus the starlight would still be easily visible, making planet detection impossible. This effect was first famously theorized by Siméon Poisson in an attempt to disprove the wave theory of light, as he thought the existence of a bright spot at the center of the shadow to be nonsensical. However, Dominique Arago experimentally verified the existence of the spot of Arago . This effect can be negated by specifically shaping the occulter. By adding specially shaped petals to the outer edge of the disc, the spot of Arago will disappear, allowing the suppression of the star's light.
This technique would make planetary detection possible for stars within approximately 10 parsecs (about 32 light years ) of Earth. It is estimated that there could be several thousand exoplanets within that distance. The starshade is similar to, but should not be confused with, the Aragoscope , [ 6 ] which is a proposed imaging device designed to use the diffraction of light around a perfectly circular light-shield to produce an image. The starshade is a proposed sunflower -shaped coronagraph disc designed to block starlight that interferes with telescopic observations of other worlds. The "petals" of the starshade's "sunflower" shape are designed to eliminate the diffraction that is the central feature of an Aragoscope . The starshade is a spacecraft designed by Webster Cash, an astrophysicist at the University of Colorado at Boulder 's Center for Astrophysics and Space Astronomy. [ 7 ] The proposed spacecraft was designed to work in tandem with space telescopes like the James Webb Space Telescope , which did not use it, or a new 4-meter telescope. [ 5 ] It would fly 72,000 km (45,000 mi) in front of a space telescope (between the telescope and a target star ) and approximately 384,000 km (238,600 mi) away from Earth, outside of Earth's heliocentric orbit. [ 8 ] When unfurled, the starshade resembles a sunflower , with pointed protrusions around its circumference. The starshade acts as a very large coronagraph : it blocks the light of a distant star, making it easier to observe associated planets . The unfurled starshade could reduce collected light from bright stars by as much as 10 billion-fold. Light that "leaks" around the edges would be used by the telescope as it scans the target system for planets . With the reduction of the harsh light, astronomers would be able to check exoplanet atmospheres tens of trillions of miles away for the potential chemical signatures of life . [ 1 ] The New Worlds Mission aims to discover and analyze terrestrial extrasolar planets ; in addition to finding and analyzing terrestrial planets, it could also discover and analyze gas giants . The New Worlds Mission would also find moons and rings orbiting extrasolar planets. This technique will involve direct imaging of planets by blocking the starlight with a starshade. It would study the moons and rings in detail and determine whether moons could also support life if gas giant planets orbit in the habitable zones of their parent stars. There are many possibilities for various New Worlds missions.
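As a rough, back-of-the-envelope illustration of the numbers involved (not taken from the mission documents), the snippet below estimates the reflected-light contrast of an Earth-like planet and its maximum angular separation from a Sun-like star at 10 parsecs. The simple geometric formula, the assumed albedo and the assumed average phase factor are illustrative assumptions, not mission values.

```python
# Rough estimate of why a ~10-billion-fold starlight suppression is needed:
# contrast and angular separation of an Earth analogue at 10 parsecs.
# Albedo and phase factor below are assumed, order-of-magnitude values.
import math

AU_M = 1.496e11   # one astronomical unit in metres
PC_M = 3.086e16   # one parsec in metres

def planet_contrast(albedo: float, radius_m: float, orbit_m: float,
                    phase_factor: float = 0.25) -> float:
    """Very rough planet/star flux ratio in reflected light (phase_factor is an assumed average)."""
    return albedo * phase_factor * (radius_m / orbit_m) ** 2

def angular_separation_arcsec(orbit_m: float, distance_m: float) -> float:
    """Maximum star-planet separation on the sky, in arcseconds."""
    return math.degrees(orbit_m / distance_m) * 3600.0

earth_contrast = planet_contrast(albedo=0.3, radius_m=6.371e6, orbit_m=AU_M)
separation = angular_separation_arcsec(AU_M, 10 * PC_M)

print(f"contrast   ~ {earth_contrast:.1e}")              # ~1e-10, i.e. roughly the 10-billion-fold suppression quoted above
print(f"separation ~ {separation:.2f} arcsec at 10 pc")  # ~0.1 arcsec
```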
https://en.wikipedia.org/wiki/New_Worlds_Mission
The Clearing House is a banking association and payments company owned by the largest commercial banks in the United States. The Clearing House is the parent organization of The Clearing House Payments Company L.L.C. , which owns and operates core payments system infrastructure in the United States, including ACH , wire payments, check image clearing, and real-time payments [ 1 ] through the RTP network, a modern real-time payment system for the U.S. Supporting services include The Clearing House Payments Authority (a payments association with over 1,000 financial institution members and corporate subscribers) and ECCHO (an entity that develops and maintains rules governing private sector check image exchange for its members, and also engages in lobbying and education). Members of The Clearing House include JPMorgan Chase & Co. , Bank of America Corp. , Citigroup Inc. , Bank of New York Mellon Corp. , Deutsche Bank AG , U.S. Bancorp and Wells Fargo & Co. [ 2 ] The Clearing House Payments Company, an organization owned by the same banks, was established in New York in 1853 for the purpose of processing transactions among banks. [ 3 ] [ 4 ] It has offices in New York, North Carolina, Texas, and Michigan. [ 4 ] In September 2009, the Clearing House joined a lawsuit in support of the Federal Reserve after a federal court in New York ruled against the Fed. [ 2 ] Filed by Bloomberg News under the Freedom of Information Act , the lawsuit, Bloomberg L.P. v. Board of Governors of the Federal Reserve System , sought records showing where the Fed had lent $2 trillion of taxpayer funds during the bank bailout of the 2007–2008 financial crisis . The Clearing House filed an appeal with the United States Supreme Court on October 26, 2010, [ 5 ] [ 6 ] but the appeal was ultimately rejected on March 21, 2011, and the Federal Reserve was required to release the data to Bloomberg L.P. within five days. [ 7 ] [ 8 ] The Clearing House was also sued by the State of New York in Andrew Cuomo v. Clearing House Association, LLC to determine whether the U.S. Treasury 's Office of the Comptroller of the Currency (OCC) had the authority to preempt a state's right to enforce its own fair lending laws against national banks. [ 9 ] A 5–4 decision by the Supreme Court overturned previous lower court decisions that had ruled in favor of the Clearing House and the OCC. The New York Clearing House Association was a clearing house for banks, established in New York City in 1853. [ 10 ] Initially, it simplified the chaotic settlement process among the banks of New York City . The New York Clearing House functioned as a quasi-central bank : setting monetary policy, issuing a form of currency, and even storing vaults of gold to back settlements. It later served to stabilize currency fluctuations and bolster the monetary system through recurring times of panic. Since the creation of the Federal Reserve System in 1913, it has used technology to meet the demands of an increasingly complex banking system. In the decade before the Clearing House was founded, banking had become increasingly complex. From 1849 to 1853, years highlighted by the California gold rush and the construction of a national railroad system, the number of New York banks increased from 24 to 57. Settlement procedures were unsophisticated, with banks settling their accounts by employing porters to travel from bank to bank to exchange checks for bags of coin, or “specie.” As the number of banks grew, exchanges became a daily event.
The official reckoning of accounts, however, did not take place until Fridays, often resulting in record-keeping errors and encouraging abuses. Each day, the porters would gather on the steps of one of the Wall Street banks for their “Porters’ Exchange.” [ 11 ] In 1853, a bank bookkeeper named George D. Lyman proposed in an article that banks send and receive checks at a central office. There was a positive response, and The New York Clearing House was organized officially on October 4 of that year. One week later, on October 11, in the basement of 14 Wall Street , 52 banks participated in the first exchange. On its first day, the Clearing House exchanged checks worth $22.6 million. Within 20 years, the average daily clearing topped $100 million. The current daily average approaches $2 trillion. [ 12 ] The New York Clearing House brought order to what had been a tangled web of exchanges. Specie certificates soon replaced gold as the means of settling balances at the Clearing House, further simplifying the process. Once certificates were exchanged for gold deposited at member banks, porters encountered fewer of the dangers they had faced previously while transporting bags of gold from bank to bank. Certificates relieved the strain on the banks' cash flow, thus reducing the likelihood of a run on deposits. Member banks had to undergo weekly audits, keep minimum reserve levels and log daily settlement of balances, which further assured more orderly, efficient exchanges. [ 11 ] Between 1853 and 1913, the U.S. experienced rapid economic expansion as well as ten financial panics. One of the Clearing House's first challenges was the panic of 1857 . When the panic began, leaders of the member banks met and devised a plan that would shorten the duration of the panic and, more importantly, maintain public confidence in the banking system. When specie payments were suspended, the Clearing House issued loan certificates that could be used to settle accounts. Known as Clearing House Loan Certificates, they were, in effect, quasi-currency, backed not by gold but by discounted county and state bank notes held by member banks. Bearing the words “Payable Through the Clearing House,” a Clearing House Loan Certificate was the joint liability of all the member banks, and thus, in lieu of specie, a most secure form of payment. [ 11 ] The certificates appeared in smaller denominations during the panic of 1873 , and continued to be used as a substitute currency among the member banks for settlement purposes during panics in subsequent decades, including the Panic of 1893 . Although they represented a potential violation of federal law against privately issued currencies, these certificates, as a contemporary observer noted, “performed so valuable a service…in moving the crops and keeping business machinery in motion, that the government…wisely forbore to prosecute.” [ 11 ] In 1913, Congress passed the Federal Reserve Act , thus creating an independent, federal clearing system modeled on the many private clearing houses that had sprung up across America. The new monetary system, with its stringent audits and minimum reserve standards, assumed the role that the clearing houses had played. [ 11 ] Since the inception of the Federal Reserve System, the New York Clearing House has concentrated on facilitating the smooth completion of financial transactions by clearing the payments. The clearing process, while highly structured, is, in theory, quite simple.
Member banks exchange checks, coupons and other certificates of value among themselves, after which the Clearing House records the resulting charges to their accounts. Entries are posted on the books of the Federal Reserve Bank of New York to settle any differences. Settlement is prepared each business day at 10:00 a.m., after about three million pieces of paper have been presented for payment. [ citation needed ] Computers now perform the payment clearing that once required paper processing. The Clearing House Interbank Payments System (CHIPS) began operation in 1970. The New York Automated Clearing House (NYACH) followed in 1975 and became the Electronic Payments Network in 2000. The Clearing House Electronic Check Clearing System (CHECCS) was added in 1992. [ citation needed ] On July 1, 2004, The New York Clearing House Association announced a name change to The Clearing House Association L.L.C. [ 13 ]
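The multilateral netting at the heart of the settlement step described above can be illustrated with a short sketch. This is a minimal illustration of the general principle only; the bank names, amounts, and the netting function are hypothetical and do not represent The Clearing House's actual systems or data.

```python
# Minimal sketch of multilateral net settlement among clearing members.
# All names and figures below are hypothetical illustrations.
from collections import defaultdict

def net_positions(obligations):
    """obligations: list of (paying_bank, receiving_bank, amount).
    Returns each bank's net position: positive = due funds, negative = owes."""
    net = defaultdict(float)
    for payer, payee, amount in obligations:
        net[payer] -= amount
        net[payee] += amount
    return dict(net)

# Example: checks presented among three hypothetical member banks.
checks = [
    ("Bank A", "Bank B", 1_200_000),
    ("Bank B", "Bank C", 800_000),
    ("Bank C", "Bank A", 500_000),
    ("Bank A", "Bank C", 300_000),
]

positions = net_positions(checks)
for bank, amount in positions.items():
    side = "receives" if amount > 0 else "pays"
    print(f"{bank} {side} {abs(amount):,.0f} at settlement")
# The net positions always sum to zero, so only the differences need to be
# settled, e.g. as entries on the books of a settlement institution.
```

Because gross obligations cancel against each other, only the much smaller net differences are settled, which is why a central clearing point reduces both the volume of payments and the risk carried between banks.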
https://en.wikipedia.org/wiki/New_York_Clearing_House
The New York Genome Center (NYGC) is an independent 501(c)(3) nonprofit academic research institution in New York, New York . [ 3 ] It serves as a multi-institutional collaborative hub focused on the advancement of genomic science and its application to drive novel biomedical discoveries. NYGC's areas of focus include the development of computational and experimental genomic methods and disease-focused research to better understand the genetic basis of cancer, neurodegenerative disease, and neuropsychiatric disease. In 2020, the NYGC also directed its expertise to COVID-19 genomics research. The Center leverages strengths in whole genome sequencing, genomic analysis, and development of genomic tools to advance genomic discovery. Its faculty hold joint tenure-track appointments at its member institutions and lead independent research labs at the center. NYGC's scientists bring a multidisciplinary and in-depth approach to the field of genomics, conducting research in single cell genomics, gene engineering, population and evolutionary genomics, technology and methods development, statistics, computational biology and bioengineering . [ 4 ] In 2017, co-founder Tom Maniatis was named Evnin Family Scientific Director and chief executive officer of the New York Genome Center. [ 5 ] The center was founded in November 2011 as a collaboration among eleven academic institutions to advance genome research, [ 6 ] based on the vision of Dietrich A. Stephan , the leadership of Tom Maniatis , [ 7 ] and financial support of $2.5 million from each institution as well as from private philanthropists. [ 6 ] In November 2012, the center recruited Robert B. Darnell as president and Scientific Director, [ 8 ] in which role he served as CEO and Founding Director before returning to Rockefeller University and his position as a Howard Hughes Medical Institute Investigator [ 9 ] in 2017. [ 10 ] [ 11 ] NYGC formally opened in a multi-story building at 101 Avenue of the Americas [ 12 ] [ 13 ] on September 19–20, 2013. [ 14 ] The 12 founding institutions (Albert Einstein College of Medicine joined the original 11 institutions in April 2013) [ 15 ] were: Currently, the NYGC has 20 member institutions, with Hackensack Meridian Health and Georgetown Lombardi Comprehensive Cancer Center joining in December 2019 as associate members [ 16 ] and Rutgers Cancer Institute of New Jersey joining as an associate member in 2020. The New York Genome Center is a 501(c)(3) nonprofit academic research institution in New York, New York . [ 17 ] Since its inception, the center has raised over $500 million to support its genomic research, including federal and private grants and philanthropy. This includes two joint gifts from the Simons Foundation and the Carson Family Charitable Trust: $100 million in 2016 and $125 million in 2019. [ 18 ] [ 19 ] [ 20 ] The New York Genome Center also receives support from its member institutions, as well as New York State, the Empire State Development Corporation, the Partnership Fund for New York City, and the New York City Economic Development Corporation. Government funding has included a $55 million grant from New York State to support genomic medicine. [ 21 ] In 2016 it received a $40 million grant from the National Human Genome Research Institute to establish a Center for Common Disease Genomics, [ 22 ] and is leading a collaborative, large-scale genomic sequencing program focused on advancing understanding of common diseases, including autism.
Additionally, the Center and Weill Cornell Medicine received a National Cancer Institute grant to support a joint cancer genomics data center for the research and clinical interpretation of tumors, a part of the ongoing development of The Cancer Genome Atlas . [ 23 ] The center was also awarded a $13.5 million contract in 2015 to conduct whole genome sequencing and analysis for the National Heart, Lung, and Blood Institute 's TOPMed program. [ 24 ] [ a ] In 2017, New York State committed $17 million in capital improvements for the New York Genome Center to house JLABS@NYC, a life sciences incubator, which opened in summer 2018. [ 25 ] In the last five years, NYGC scientists have published over 200 papers in leading scientific journals. An up-to-date listing of publications is available at https://www.nygenome.org/lab-groups-overview/publications/
https://en.wikipedia.org/wiki/New_York_Genome_Center
The Computer Graphics Lab is a computer lab located at the New York Institute of Technology (NYIT), founded by Alexander Schure . It was originally located at the "pink building" on the NYIT campus. It has played an important role in the history of computer graphics and animation, as founders of Pixar and Lucasfilm , including Turing Award winners Edwin Catmull and Patrick Hanrahan , began their research there. [ 1 ] It is the birthplace of entirely 3D CGI films. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The lab was initially founded to produce a high-quality feature film under the project name The Works . The feature, which was never completed, was planned as a 90-minute film that would have been the first entirely computer-generated movie. Production centered on DEC PDP and VAX machines. Many of the original CGL team now form the elite of the CG and computer world, with members going on to Silicon Graphics , Microsoft , Cisco , NVIDIA and others, including Pixar president, co-founder and Turing laureate Ed Catmull , Pixar co-founder and Microsoft graphics fellow Alvy Ray Smith , Pixar co-founder Ralph Guggenheim , Walt Disney Animation Studios chief scientist Lance Williams , Netscape and Silicon Graphics founder Jim Clark , Tableau co-founder and Turing laureate Pat Hanrahan , Microsoft graphics fellow Jim Blinn , Thad Beier, Oscar and Bafta nominee Jacques Stroweis , Andrew Glassner , and Tom Brigham. Systems programmer Bruce Perens went on to co-found the Open Source Initiative . [ 6 ] [ 7 ] Researchers at the New York Institute of Technology Computer Graphics Lab created the tools that made entirely 3D CGI films possible. [ 8 ] [ 9 ] Among NYIT CG Lab's many innovations was an eight-bit paint system to ease computer animation. [ 10 ] [ 11 ] NYIT CG Lab was regarded as the top computer animation research and development group in the world during the late 1970s and early 1980s. [ 12 ] [ 13 ] [ 14 ] The lab is presently located at NYIT's Long Island campus, [ 15 ] [ 16 ] and NYIT currently offers a Ph.D. program in Computer Science. [ 17 ]
https://en.wikipedia.org/wiki/New_York_Institute_of_Technology_Computer_Graphics_Lab
The New York Number Theory Seminar is a research seminar devoted to the theory of numbers and related parts of mathematics and physics . The seminar began in 1982 under the founding organizers Harvey Cohn, David and Gregory Chudnovsky , and Melvyn B. Nathanson . It is held at the Graduate Center, CUNY . The New York Number Theory Seminar began in January 1982 and was originally organized by number theorists Harvey Cohn, David and Gregory Chudnovsky , and Melvyn B. Nathanson. [ 1 ] Since the retirement of Cohn, Nathanson is the sole organizer. [ 1 ] The seminar also organizes an annual Workshop on Combinatorial and Additive Number Theory (CANT) at the Graduate Center, CUNY . [ citation needed ] Four volumes of the collected lecture notes of the seminar were published in the Lecture Notes in Mathematics series by Springer-Verlag . [ 1 ] These volumes covered the seminar from 1982 to 1988. [ 1 ] Three additional stand-alone books were published by Springer-Verlag under the title Number Theory , covering the seminar between 1989 and 2003. [ 1 ]
https://en.wikipedia.org/wiki/New_York_Number_Theory_Seminar
The New York Stem Cell Foundation , or NYSCF , is an American non-profit research institute focused on stem cell research, technology development, and funding researchers. [ 1 ] Headquartered on the far west side of Manhattan, New York, NYSCF employs 114 scientists, technicians, engineers, and administrative and other staff, [ 2 ] in addition to funding early career investigators and postdoctoral fellows. Since its inception, NYSCF has raised and invested more than $400 million for stem cell research. [ 2 ] NYSCF was founded in New York City by Susan L. Solomon , a lawyer and entrepreneur, and Mary Elizabeth Bunzel, a former journalist, in 2005 to accelerate stem cell-based approaches to researching and treating type 1 diabetes [ 3 ] and in response to the refusal of the administration of President George W. Bush to make a major investment in stem cell research. [ 1 ] In 2006, NYSCF opened the NYSCF Research Institute – a 500 square foot, one-room independent laboratory located adjacent to Columbia University [ 4 ] [ 5 ] – as a safe-haven to conduct somatic cell nuclear transfer research through a collaboration with Columbia University and Harvard University . [ 6 ] In 2015, NYSCF signed a 20-year lease to move its headquarters and NYSCF Research Institute laboratories to a renovated 42,000 square foot space at 619 West 54th Street [ 7 ] in the former Warner Brothers 'Movie Lab' building, [ 8 ] [ 9 ] rebranded as the Hudson Research Center by commercial real estate developer and building owner Taconic. [ 10 ] Opened in 2017, the new headquarters includes space for a Good Manufacturing Practice facility to manufacture cells for clinical trials. [ 11 ] In 2021, New York City announced it would grant NYSCF $6.5M as one of four applied research and development (R&D) facilities to equip an expansion of its Research Institute. [ 12 ] The NYSCF is currently led by Jennifer J. Raab , former President of Hunter College. Raab was appointed as President & Chief Executive Officer in January 2024. [ 13 ] The board of directors includes Roy Geronemus , Stephen M. Ross , Stephen Scherr , Kay Unger , Paul Goldberger , and Siddhartha Mukherjee . [ 14 ] In 2015, NYSCF described the development of the NYSCF Global Stem Cell Array, a fully-automated system for high-throughput creation, differentiation, and quality control of stem cell lines. [ 15 ] [ 16 ] The system saves five to six times the cost of reagents as compared to manual stem cell derivation. [ 15 ] [ 16 ] The Global Stem Cell Array has been used to conduct research on several patient groups including children with rare diseases, [ 17 ] veterans with post-traumatic stress disorder , [ 18 ] [ 19 ] and Parkinson’s patients. [ 20 ] [ 21 ] NYSCF research resulting in the first human stem cell lines from the cells of patients with amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease, was named as Time magazine's top medical breakthrough of 2008 and the number one breakthrough of the year by Science magazine. [ 22 ] [ 23 ] In 2018, a phase 2 clinical trial for Ezogabine , an epilepsy treatment identified as a possible ALS therapy based on this human stem cell model, was shown to reduce motor neuron excitability in ALS patients. [ 24 ] [ 25 ] In 2013, NYSCF researchers created the first patient-specific bone from stem cells and successfully transplanted the grafts into mice. 
[ 26 ] [ 27 ] NYSCF researchers created stem cells and derived neurons from a pair of identical twins, one with Parkinson’s disease and one without, finding their neurons differed in how they produce the neurotransmitter dopamine and the enzyme beta-glucocerebrosidase, in addition to differing in a molecular signaling pathway. [ 28 ] [ 29 ] NYSCF researchers, in collaboration with researchers at New York University , created astrocytes from human stem cells and showed that in disease-like environments these cells can turn into neuron killers. [ 30 ] [ 31 ] In 2012, NYSCF researchers developed mitochondrial replacement therapy (MRT), a technique to prevent the mother-to-child transmission of mitochondrial diseases, [ 32 ] [ 33 ] [ 3 ] [ 34 ] which is now approved for clinical use in the United Kingdom. [ 35 ] With Google Research, NYSCF scientists used the NYSCF Array and artificial intelligence algorithms to identify new cellular features of Parkinson’s disease by analyzing over six million images of skin cells, sampled and expanded from a group of 91 Parkinson’s patients and healthy controls. [ 20 ] [ 36 ] Organizations with which NYSCF has partnered or is currently partnering include: Google; [ 36 ] the Icahn School of Medicine at Mount Sinai, the James J. Peters Veterans Affairs Medical Center, and Yale University School of Medicine; [ 19 ] Rush University Medical Center , Harvard Medical School , and Brigham and Women’s Hospital ; [ 37 ] Johns Hopkins School of Medicine and Bloomberg Philanthropies ; [ 38 ] [ 39 ] and Columbia University Medical Center and the National Eye Institute . [ 40 ] NYSCF started a working group, the Initiative on Women in Science and Engineering (IWISE), to address gender equality in science and STEM fields. [ 41 ] The IWISE working group published seven actionable strategies for institutions to promote gender equity in a 2015 Cell Stem Cell paper. [ 41 ] [ 42 ] One of these steps is an Institutional Report Card for Gender Equality, which NYSCF created and requires every NYSCF grant applicant to fill out. The results of a 5-year analysis of these report card submissions were published in a 2019 Cell Stem Cell paper defining the extent of gender parity issues in the academic pipeline and opportunities for improvement. [ 43 ] NYSCF was founded with private philanthropy from individuals and foundations. Notable early funders include former New York City mayor Michael R. Bloomberg ; the investor Stanley Druckenmiller and his wife, Fiona; and a foundation founded by the late hedge-fund manager Julian Robertson . [ 1 ] NYSCF hosts an annual fundraising Gala and Science Fair. Past honorees include Janet and Jerry Zucker , Sanjay Gupta, MD ; Siddhartha Mukherjee, MD, DPhil ; Irving Weissman, MD ; Susan and Stephen Scherr ; Victor Garber ; Derrick Rossi, PhD ; Kizzmekia Corbett, PhD ; Barney Graham, MD, PhD ; Katalin Karikó, PhD ; Drew Weissman, MD, PhD ; Brooke Ellison ; Frank Gehry ; and David Rockwell . [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] In 2020 and 2021, NYSCF held virtual Galas, both directed by Scott Ellis and hosted by Sanjay Gupta, MD . [ 45 ] In addition to philanthropy, NYSCF also receives funding from grants, partnerships, and collaborations. [ citation needed ] Several awards are administered by the NYSCF. The Robertson Early Career Investigator Awards are given to scientists around the world who have recently launched their own laboratories and provide unrestricted funding over a five-year period, funded by the Robertson Foundation since 2010.
[ 49 ] [ 50 ] The Druckenmiller Postdoctoral Fellows Awards provide three years of unrestricted funding to postdoctoral stem cell researchers in the tri-state area of New York, New Jersey and Connecticut, and are funded by Stanley and Fiona Druckenmiller. [ 51 ] [ 52 ] Notable recipients of NYSCF awards include Feng Zhang , Edward Boyden , Jayaraj Rajagopal , Paola Arlotta , Valentina Greco , Lydia W. S. Finley , Shruti Naik , Lauren Orefice , Lauren O’Connell , Elaine Hsiao , Carolyn (Lindy) McBride , Paul J. Tesar , Vanessa Ruta , Edward Chang , Lisa Giocomo , Kay Tye , Dragana Rogulja , Maria Lehtinen , Claire Wyart , Sergiu P. Pașca , Ilana B. Witten , Franziska Michor , and Amy Wagers . [ citation needed ]
https://en.wikipedia.org/wiki/New_York_Stem_Cell_Foundation
The New Zealand Institute of Chemistry (NZIC) was founded in 1931 and is the professional membership organisation for professionals working in the field of chemistry across the education and industry sectors in New Zealand. It is organised into six geographical branches (Auckland, Waikato, Manawatu, Wellington, Canterbury, and Otago) and a number of specialist groups. In 2019 it formed the group Secondary Chemistry Educators of NZ (SCENZ) as the national chemistry teachers’ subject association. The NZIC publishes its own quarterly journal Chemistry in New Zealand . It has been a co-owner society of Chemistry: An Asian Journal since 2008, and is a co-owner of Physical Chemistry Chemical Physics published by the Royal Society of Chemistry (UK). The NZIC holds a national conference every two years with the branches taking turns to host. It is also a co-sponsor of the Pacifichem Congress which is held in Hawaii every five years. The Council of the NZIC consists of an Executive (President, Vice President or Past President and Treasurer), Student Representative, Secretary, and delegates from each of the Branch Committees. Members of the executive are elected annually at the Annual General Meeting. The NZIC is a member of the Federation of Asian Chemical Societies (FACS) [ 1 ] and a constituent organisation of Royal Society Te Apārangi . [ 2 ] Currently there are four categories of membership: Member (including Student Member), Fellow, Honorary Fellow (the highest honour of the NZIC), and school member. A member of the New Zealand Institute of Chemistry is designated with the honorific affix "MNZIC". As the professional body for chemistry in New Zealand, the Institute can promote a member to Fellow of the institute ("FNZIC"). This requires a minimum of 5 years’ professional experience as a Member, and the candidate must have shown a substantial measure of ability or achievement in chemistry. Sarah Masters (past president) This article about a chemistry organization is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/New_Zealand_Institute_of_Chemistry
The New Zealand Nationally Significant Collections and Databases (NSCDs) are government-funded biological and physical collections or databases that are considered important and significant to New Zealand. They consist of living organisms (the ICMP culture collection ), preserved samples (the Marine Benthic Biology Collection), or data (the New Zealand Geomagnetic Database). Many of the physical collections also have associated databases. The NSCDs were established in 1992 during the breakup of the DSIR and establishment of the Crown Research Institutes . They are currently funded at 19 million NZD per annum through the Strategic Science Investment Fund of MBIE. [ 1 ] [ 2 ] The collections and databases include the National Groundwater Monitoring Programme, [ 6 ] the National Petrology Reference Collection and PET Database, [ 7 ] the New Zealand Fossil Record File, [ 8 ] the New Zealand Geomagnetic Database, [ 9 ] the New Zealand National Paleontological Collection and Database, [ 10 ] the New Zealand Volcano Database [ 11 ] and the Regional Geological Map Archive and Database; [ 12 ] the Allan Herbarium (CHR), [ 13 ] the International Collection of Microorganisms from Plants (ICMP), [ 14 ] the Land Resource Information System, [ 15 ] the National Vegetation Survey Databank, [ 16 ] the New Zealand Arthropod Collection , [ 17 ] the National Nematode Collection of New Zealand, [ 18 ] the New Zealand Fungarium (PDD), [ 19 ] the Nga Tipu Whakaoranga Ethnobotany Database [ 20 ] and Te Kohinga Harakaka o Aotearoa/New Zealand Flax Collection, [ 21 ] held by Landcare Research New Zealand Limited; the National Climate Database, [ 22 ] the New Zealand Freshwater Fish Database, [ 23 ] the NIWA Marine Benthic Biology Collection and the Water Resources Archive, held by the National Institute of Water and Atmospheric Research Limited (NIWA); and the National Collections of Fruit and Crop Germplasm.
https://en.wikipedia.org/wiki/New_Zealand_Nationally_Significant_Collections_and_Databases
A new chemical entity ( NCE ) is, according to the U.S. Food and Drug Administration , a novel, small, chemical molecule drug that is undergoing clinical trials or has received a first approval (not a new use) by the FDA in any other application submitted under section 505(b) of the Federal Food, Drug, and Cosmetic Act . [ 1 ] A new molecular entity ( NME ) is a broader term that encompasses both an NCE and an NBE (New Biological Entity). An active moiety is a molecule or ion , excluding those appended portions of the molecule that cause the drug to be an ester , salt (including a salt with hydrogen or coordination bonds ), or other noncovalent derivative (such as a complex , chelate , or clathrate ) of the molecule, responsible for the physiological or pharmacological action of the drug substance. [ 2 ] An NCE is a molecule developed by the innovator company in the early drug discovery stage which, after undergoing clinical trials, may become a drug that treats a disease. Synthesis of an NCE is the first step in the process of drug development . Once the synthesis of the NCE has been completed, companies have two options: they can conduct clinical trials on their own, or license the NCE to another company. In the latter option, companies can avoid the expensive and lengthy process of clinical trials, as the licensee company conducts the further clinical trials and subsequently launches the drug. Companies adopting this model of business can generate high margins, as they receive a large one-time payment for the NCE and also enter into a revenue-sharing agreement with the licensee company. Under the Food and Drug Administration Amendments Act of 2007 , all new chemical entities must first be reviewed by an advisory committee before the FDA can approve them. This article about medicinal chemistry is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/New_chemical_entity
The Massachusetts Institute of Technology study "The Future of Geothermal Energy – Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century" (2006) points out the essential importance of developing an economical deep geothermal drilling technology. With older drilling technologies, borehole cost rises exponentially with depth. Finding a drilling technology for which cost rises only approximately linearly with increasing depth is therefore an important challenge. The MIT study characterizes the requirements on new fast and ultra-deep drilling technology. There are more than 20 innovative drilling technologies, such as: laser, spallation, plasma, electron beam, [ 1 ] pellets, enhanced rotary, electric spark and discharge, electric arc, water jet erosion, ultrasonic, chemical, induction, nuclear, forced flame explosive, turbine, high frequency, microwave, heating/cooling stress, electric current and several others. The most promising solutions are mentioned below. One of the most promising approaches in the deep drilling field is the use of electrical plasma . It has lower energy efficiency than some other technologies, but it offers several other advantages, such as producing boreholes with a wide range of diameters and drilling in a water environment. A research team from Slovakia has developed a drilling concept based on electrical plasma. The core of the research is carried out at the Research Centre for Deep Drilling, which was opened on the premises of the Slovak Academy of Sciences . Only a very small number of companies have embraced this method, e.g. GA Drilling , headquartered in Bratislava, Slovakia. No one has yet proved the effective use of these techniques in severe conditions. Other complications, including on-site energy supply and material transport from 5 to 10 km bores, also require refinement before the approach becomes both technically and economically feasible.
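The contrast between exponential and linear cost growth can be made concrete with a short sketch. The cost models and coefficients below are arbitrary placeholders chosen for illustration, not figures taken from the MIT study.

```python
# Illustrative comparison of borehole-cost models: roughly exponential growth
# with depth for conventional drilling versus the approximately linear growth
# hoped for from new drilling technologies. All coefficients are hypothetical.
import math

def conventional_cost(depth_km, base=1.0, rate=0.6):
    # Exponential model: cost multiplies by e**rate for every additional km.
    return base * math.exp(rate * depth_km)

def target_cost(depth_km, base=1.0, per_km=1.5):
    # Hoped-for linear model for a new drilling technology.
    return base + per_km * depth_km

for depth in (2, 5, 10):
    print(f"{depth:>2} km: conventional ~ {conventional_cost(depth):7.1f}, "
          f"linear target ~ {target_cost(depth):5.1f} (arbitrary cost units)")
```

Under any exponential model the gap widens rapidly with depth, which is why the economics of 5 to 10 km geothermal bores hinge on technologies whose cost scales closer to linearly.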
https://en.wikipedia.org/wiki/New_drilling_technologies
New media are communication technologies that enable or enhance interaction between users as well as interaction between users and content. [ 1 ] In the middle of the 1990s, the phrase "new media" became widely used as part of a sales pitch for the influx of interactive CD-ROMs for entertainment and education. [ 2 ] The new media technologies, sometimes known as Web 2.0 , include a wide range of web-related communication tools such as blogs , wikis, online social networking, virtual worlds, and other social media platforms. [ 3 ] The phrase "new media" refers to computational media that share material online and through computers. [ 4 ] New media inspire new ways of thinking about older media. Media do not replace one another in a clear, linear succession, instead evolving in a more complicated network of interconnected feedback loops . [ 5 ] What is different about new media is how they specifically refashion traditional media and how older media refashion themselves to meet the challenges of new media. [ 6 ] Unless they contain technologies that enable digital generative or interactive processes, broadcast television programs , feature films , magazines , and books are not considered to be new media. [ 4 ] In the 1950s, connections between computing and radical art began to grow stronger. It was not until the 1980s that Alan Kay and his co-workers at Xerox PARC began to give the computing capability of a personal computer to the individual, rather than having a big organization be in charge of it. In the late 1980s and early 1990s, however, we seem to witness a different kind of parallel relationship between social changes and computer design . Although causally unrelated, conceptually it makes sense that the Cold War and the design of the Web took place at exactly the same time. [ 4 ] Writers and philosophers such as Marshall McLuhan were instrumental in the development of media theory during this period. McLuhan's now-famous declaration in Understanding Media: The Extensions of Man that " the medium is the message " drew attention to the too often ignored influence that media and technology themselves, rather than their "content," have on humans' experience of the world and on society broadly. Until the 1980s, media relied primarily upon print and analog broadcast models such as television and radio . The last twenty-five years have seen the rapid transformation into media which are predicated upon the use of digital technologies such as the Internet and video games . However, these examples are only a small representation of new media. The use of digital computers has transformed the remaining 'old' media, as suggested by the advent of digital television and online publications . Even traditional media forms such as the printing press have been transformed through the application of technologies such as image manipulation software like Adobe Photoshop and desktop publishing tools. Andrew L. Shapiro argues that the "emergence of new, digital technologies signals a potentially radical shift of who is in control of information, experience and resources". [ 7 ] W. Russell Neuman suggests that whilst the "new media" have technical capabilities to pull in one direction, economic and social forces pull back in the opposite direction. According to Neuman, "We are witnessing the evolution of a universal interconnected network of audio, video, and electronic text communications that will blur the distinction between interpersonal and mass communication; and between public and private communication".
[ 8 ] Neuman argues that new media will: Consequently, it has been the contention of scholars such as Douglas Kellner and James Bohman that new media, and particularly the Internet, will provide the potential for a democratic postmodern public sphere, in which citizens can participate in well informed, non-hierarchical debate pertaining to their social structures. Contradicting these positive appraisals of the potential social impacts of new media are scholars such as Edward S. Herman and Robert McChesney , who have suggested that the transition to new media has seen a handful of powerful transnational telecommunications corporations achieve a level of global influence which was hitherto unimaginable. Scholars have highlighted both the positive and negative potential and actual implications of new media technologies, suggesting that some of the early work into new media studies was guilty of technological determinism, whereby the effects of media were determined by the technology themselves, rather than through tracing the complex social networks which governed the development, funding, implementation and future development of any technology. Based on the argument that people have a limited amount of time to spend on the consumption of different media, displacement theory argues that the viewership or readership of one particular outlet leads to a reduction in the amount of time spent by the individual on another. The introduction of new media, such as the internet, therefore reduces the amount of time individuals would spend on existing "old" media, which could ultimately lead to the end of such traditional media. [ 9 ] Although there are several ways that new media may be described, Lev Manovich , in an introduction to The New Media Reader , defines new media by using eight propositions: [ 4 ] The rise of new media has increased communication between people all over the world and the Internet. It has allowed people to express themselves through blogs, websites, videos, pictures, and other user-generated media. Terry Flew stated that as new technologies develop, the world becomes more globalized. Globalization is more than the development of activities throughout the world; it allows the world to be connected no matter the distance between users, [ 10 ] and Frances Cairncross expresses this great development as the "death of distance". [ 11 ] New media has made friendships formed through digital social spaces more prominent than those formed in physical places. [ 12 ] Globalization is generally stated as "more than expansion of activities beyond the boundaries of particular nation states". [ 13 ] New media "radically break the connection between physical place and social place, making physical location much less significant for our social relationships". [ 12 ] However, the changes in the new media environment create a series of tensions in the concept of "public sphere". [ 14 ] According to Ingrid Volkmer, "public sphere" is defined as a process through which public communication becomes restructured and partly disembedded from national political and cultural institutions. [ 15 ] This trend of the globalized public sphere is not only a geographical expansion from a nation to the worldwide level, but also a change in the relationship between the public, the media and the state. [ 15 ] " Virtual communities " are being established online and transcend geographical boundaries, eliminating social restrictions.
[ 16 ] Howard Rheingold describes these globalized societies as self-defined networks, which resemble what we do in real life. "People in virtual communities use words on screens to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, make plans, brainstorm, gossip, feud, fall in love, create a little high art and a lot of idle talk". [ 17 ] For Sherry Turkle "making the computer into a second self, finding a soul in the machine, can substitute for human relationships". [ 18 ] New media has the ability to connect like-minded others worldwide. While this perspective suggests that the technology drives, and is therefore a determining factor in, the process of globalization, arguments involving technological determinism are generally frowned upon by mainstream media studies. [ 19 ] [ 20 ] [ 21 ] Instead, academics focus on the multiplicity of processes by which technology is funded, researched and produced, forming a feedback loop when the technologies are used and often transformed by their users, which then feeds into the process of guiding their future development. Commentators such as Manuel Castells [ 22 ] espouse a "soft determinism", [ 21 ] contending that "Technology does not determine society. Nor does society script the course of technological change, since many factors, including individual inventiveness and entrepreneurialism, intervene in the process of scientific discovery, technical innovation and social applications, so the final outcome depends on a complex pattern of interaction. Indeed the dilemma of technological determinism is probably a false problem, since technology is society and society cannot be understood without its technological tools". [ 22 ] This, however, is still distinct from stating that societal changes are instigated by technological development, which recalls the theses of Marshall McLuhan . [ 23 ] [ 24 ] Manovich [ 25 ] and Castells [ 22 ] have argued that whereas mass media "corresponded to the logic of industrial mass society, which values conformity over individuality," [ 26 ] new media follows the logic of the postindustrial or globalized society whereby "every citizen can construct her own custom lifestyle and select her ideology from a large number of choices. Rather than pushing the same objects to a mass audience, marketing now tries to target each individual separately". [ 26 ] The evolution of virtual communities has highlighted many aspects of the real world. Tom Boellstorff's studies of Second Life discuss a term known as "griefing." In Second Life , griefing means to consciously upset another user during their experience of the game. [ 27 ] Other users also reported situations in which their avatars were raped or sexually harassed. In the real world, these same types of actions are carried out. Virtual communities are a clear demonstration of new media through means of new technological developments. Anthropologist Daniel Miller and sociologist Don Slater studied Trinidadian culture on online networks through the use of ethnographic studies. The study argues that internet culture does exist and that this version of new media cannot eliminate people's relations to their geographic area or national identity. The focus on Trini culture specifically demonstrated which Trini values and beliefs existed within the page, while also representing users' identities on the web.
[ 28 ] Social movement media has a rich and storied history (see Agitprop ) that has changed at a rapid rate since new media became widely used. [ 29 ] The Zapatista Army of National Liberation of Chiapas , Mexico were the first major movement to make widely recognized and effective use of new media for communiques and organizing in 1994. [ 29 ] Since then, new media has been used extensively by social movements to educate, organize, share cultural products of movements, communicate, coalition build, and more. The WTO Ministerial Conference of 1999 protest activity was another landmark in the use of new media as a tool for social change. The WTO protests used media to organize the original action, communicate with and educate participants, and served as an alternative media source. [ 30 ] The Indymedia movement also developed out of this action, and has been a great tool in the democratization of information, which is another widely discussed aspect of the new media movement. [ 31 ] Some scholars even view this democratization as an indication of the creation of a "radical, socio-technical paradigm to challenge the dominant, neoliberal and technologically determinist model of information and communication technologies." [ 32 ] A less radical view along these same lines is that people are taking advantage of the Internet to produce a grassroots globalization, one that is anti-neoliberal and centered on people rather than the flow of capital. [ 33 ] Chanelle Adams, a feminist blogger for the Bi-Weekly webpaper The Media , says that in her "commitment to anti-oppressive feminist work, it seems obligatory for her to stay in the know just to remain relevant to the struggle." For Adams and other feminists who work to spread their messages to the public, new media becomes crucial for completing this task, allowing people to access a movement's information instantaneously. Some are also skeptical of the role of new media in social movements. Many scholars point out unequal access to new media as a hindrance to broad-based movements, sometimes even oppressing some within a movement. [ 34 ] Others are skeptical about how democratic or useful it really is for social movements, even for those with access. [ 35 ] New media has also found a use with less radical social movements such as the Free Hugs Campaign , which uses websites, blogs, and online videos to demonstrate the effectiveness of the movement itself. Along with this example, the use of high-volume blogs has allowed numerous views and practices to become more widespread and gain more public attention. Another example is the ongoing Free Tibet Campaign , which has been seen on numerous websites and has a slight tie-in with the band Gorillaz in their Gorillaz Bitez clip featuring the lead singer 2D sitting with protesters at a Free Tibet protest. Another social change seen coming from new media is trends in fashion and the emergence of subcultures such as textspeak , Cyberpunk , and various others. Following trends in fashion and textspeak, new media also makes way for "trendy" social change. The Ice Bucket Challenge is a recent example of this. All in the name of raising money for ALS (the lethal neurodegenerative disorder also known as Lou Gehrig's disease ), participants are nominated by friends via social media such as Facebook and Twitter to dump a bucket of ice water on themselves, or donate to the ALS Foundation. This became a huge trend through Facebook's tagging tool, allowing nominees to be tagged in the post.
The videos appeared on more people's feeds, and the trend spread fast. This trend raised over 100 million dollars for the cause and increased donations by 3,500 percent. A meme, often seen on the internet, is an idea that has been replicated and passed along. Ryan Milner compared this concept to a possible tool for social change. The combination of pictures and text represents pop polyvocality ("the people's version"). A meme can make more serious conversations less tense while still displaying the situation at stake. [ 36 ] The music industry was affected by the advancement of new media. Throughout years of technology growth, the music industry faced major changes such as the distribution of music from shellac to vinyl, vinyl to 8-tracks, and many more changes over the decades. Beginning in the early 1900s, audio was released on a brittle material called " shellac ." The quality of the sound was very distorted, and the delicacy of the physical format resulted in the change to LPs (Long Playing). The first LP was made by Columbia Records in 1948, and later on RCA developed the EP (Extended Play), which was only seven inches across and offered a longer playing time than a standard single. [ 37 ] The desire for portable music still persisted in this era, which prompted the launch of the compact cassette. The cassette was released in 1963 and flourished in the following decades, as cassette players were installed in cars for entertainment when traveling. Not long after the development of the cassette, the music industry began to see forms of piracy. Cassette tapes allowed people to make their own tapes without paying for rights to the music. [ 37 ] This effect caused major losses for the music industry, but it also led to the evolution of mixtapes. As music technologies continued to develop from 8-tracks , floppy discs , and CDs to, now, MP3 , so did new media platforms. The emergence of MP3s in the 1990s has since changed the world we live in today. At first, MP3 tracks threatened the industry with massive piracy through file-sharing networks such as Napster , until laws were established to prevent this. [ 37 ] However, consumption of music is higher than ever before due to streaming platforms like Apple Music, Spotify , Pandora , and many more. [ 37 ] New media has become of interest to the global espionage community as it is easily accessible electronically in database format and can therefore be quickly retrieved and reverse engineered by national governments . Particularly of interest to the espionage community are Facebook and Twitter , two sites where individuals freely divulge personal information that can then be sifted through and archived for the automatic creation of dossiers on both people of interest and the average citizen. [ citation needed ] New media also serves as an important tool for both institutions and nations to promote their interests and values (the contents of such promotion may vary according to different purposes). Some communities consider it an approach of "peaceful evolution" that may erode their own nation's system of values and eventually compromise national security. Interactivity has become a term for a number of new media use options evolving from the rapid dissemination of Internet access points, the digitalization of media, and media convergence . In 1984, Ronald E. Rice defined new media as communication technologies that enable or facilitate user-to-user interactivity and interactivity between user and information.
[ 38 ] Such a definition replaces the " one-to-many " model of traditional mass communication with the possibility of a " many-to-many " web of communication. Any individual with the appropriate technology can now produce his or her online media and include images, text, and sound about whatever he or she chooses. [ 39 ] Thus the convergence of new methods of communication with new technologies shifts the model of mass communication, and radically reshapes the ways we interact and communicate with one another. In "What is new media?", Vin Crosbie described three different kinds of communication media. He saw interpersonal media as "one to one", mass media as "one to many", and finally new media as individuation media or "many to many". [ 40 ] Interactivity is present in some programming work, such as video games. It's also viable in the operation of traditional media. In the mid-1990s, filmmakers started using inexpensive digital cameras to create films. It was also the time when moving image technology had developed, which was able to be viewed on computer desktops in full motion. This development of new media technology was a new method for artists to share their work and interact with the big world. Other settings of interactivity include radio and television talk shows, letters to the editor, listener participation in such programs, and computer and technological programming. [ 41 ] Interactive new media has become a true benefit to every one because people can express their artwork in more than one way with the technology that we have today and there is no longer a limit to what we can do with our creativity. Interactivity can be considered a central concept in understanding new media, but different media forms possess, or enable [ 42 ] different degrees of interactivity, [ 43 ] and some forms of digitized and converged media are not in fact interactive at all. Tony Feldman [ 44 ] considers digital satellite television as an example of a new media technology that uses digital compression to dramatically increase the number of television channels that can be delivered, and which changes the nature of what can be offered through the service, but does not transform the experience of television from the user's point of view, and thus lacks a more fully interactive dimension. It remains the case that interactivity is not an inherent characteristic of all new media technologies, unlike digitization and convergence. Terry Flew argues that "the global interactive games industry is large and growing, and is at the forefront of many of the most significant innovations in new media". [ 45 ] Interactivity is prominent in these online video games such as World of Warcraft , The Sims Online and Second Life . These games, which are developments of "new media," allow for users to establish relationships and experience a sense of belonging that transcends traditional temporal and spatial boundaries (such as when gamers logging in from different parts of the world interact). These games can be used as an escape or to act out a desired life. New media have created virtual realities that are becoming virtual extensions of the world we live in. With the creation of Second Life and Active Worlds before it, people have even more control over this virtual world, a world where anything that a participant can think of can become a reality. [ 46 ] Interactive games and platforms such as YouTube and Facebook have led to many viral apps that devise a new way to be interacting with media. 
The development of GIFs , which dates back to the early stages of webpage development, has evolved into a social media phenomenon. [ 47 ] Miltner and Highfield refer to GIFs as being "polysemic." These small looping images carry specific meanings within cultures and can often be used to convey more than one meaning. [ 47 ] Miltner and Highfield argue that GIFs are particularly useful in creating affective or emotional connections of meaning between people. Affect creates an emotional connection of meaning to the person and their culture. [ 47 ] The new media industry shares an open association with many market segments in areas such as software / video game design , television , radio , mobile and particularly movies, advertising and marketing , through which industry seeks to gain from the advantages of two-way dialogue with consumers, primarily through the Internet . As a device to source the ideas, concepts, and intellectual properties of the general public, the television industry has used new media and the Internet to expand its resources for new programming and content. The advertising industry has also capitalized on the proliferation of new media, with large agencies running multimillion-dollar interactive advertising subsidiaries. Interactive websites and kiosks have become popular. In a number of cases advertising agencies have also set up new divisions to study new media. Public relations firms are also taking advantage of the opportunities in new media through interactive PR practices. Interactive PR practices include the use of social media [ 48 ] to reach a mass audience of online social network users. With the development of the Internet, many new career paths have emerged. Before this rise, many tech jobs were considered boring. The Internet led to creative work that was seen as casual and diverse across gender, race, and sexual orientation. Web design, gaming design, webcasting, blogging, and animation are all creative career paths that came with this rise. At first glance, the field of new media may seem hip, cool, creative, and relaxed. What many don't realize is that working in this field is tiresome. Many of the people that work in this field don't have steady jobs. Work in this field has become project-based: individuals work project to project for different companies. Most people are not working on one project or contract, but on multiple ones at the same time. Despite working on numerous projects, people in this industry receive low pay, which contrasts sharply with the tech-millionaire stereotype. It may seem like a carefree life from the outside, but it is not. New media workers work long hours for little pay and spend up to 20 hours a week looking for new projects to work on. [ 49 ] Based on nationally representative data, a study conducted by the Kaiser Family Foundation in five-year intervals in 1998–99, 2003–04, and 2008–09 found that with technology allowing nearly 24-hour media access, the amount of time young people spend with entertainment media has risen dramatically, especially among Black and Hispanic youth. [ 50 ] Today, 8 to 18-year-olds devote an average of 7 hours and 38 minutes (7:38) to using entertainment media in a typical day (more than 53 hours a week), about the same amount of time most adults spend at work per day. Since much of that time is spent 'media multitasking' (using more than one medium at a time), they actually pack a total of 10 hours and 45 minutes' worth of media content into those 7½ hours per day.
According to the Pew Internet & American Life Project , 96% of 18 to 29-year-olds and three-quarters (75%) of teens now own a cell phone, 88% of whom text, with 73% of wired American teens using social networking websites, a significant increase from previous years. [ 51 ] A survey of over 25,000 9- to 16-year-olds from 25 European countries found that many underage children use social media sites despite the sites' stated age requirements, and many youth lack the digital skills to use social networking sites safely. [ 52 ] The development of the new digital media demands a new educational model by parents and educators. Parental mediation has become a way to manage children's experiences with the Internet, chat, video games and social networks. [ 53 ] A recent trend on the internet is the YouTuber generation. YouTubers are young people who offer free videos on their personal YouTube channels. There are videos on games, fashion, food, cinema and music, in which they offer tutorials or commentary. [ 54 ] The role of cellular phones, such as the iPhone , has made it nearly impossible to be in social isolation and has created the potential to ruin relationships. The iPhone activates the insular cortex of the brain, which is associated with feelings of love. People show similar feelings to their phones as they would to their friends, family and loved ones. Countless people spend more time on their phones while in the presence of other people than they spend with the people in the same room or class. [ 55 ] [ dubious – discuss ] In trying to determine the impact of new media on political campaigning and electioneering, the existing research has tried to examine whether new media supplants conventional media. Television is still the dominant news source, but new media's reach is growing. What is known is that new media has had a significant impact on elections, and what began in the 2008 presidential campaign established new standards for how campaigns would be run. Since then, campaigns have also expanded their outreach methods by developing targeted messages for specific audiences that can be reached via different social media platforms. Both parties have specific digital media strategies designed for voter outreach. Additionally, their websites are socially connected, engaging voters before, during, and after elections. Email and text messages are also regularly sent to supporters encouraging them to donate and get involved. [ 56 ] Some existing research focuses on the ways that political campaigns, parties, and candidates have incorporated new media into their political strategizing. This is often a multi-faceted approach that combines new and old media forms to create highly specialized strategies. This allows them to reach wider audiences, but also to target very specific subsets of the electorate. They are able to tap into polling data and in some cases harness the analytics of the traffic and profiles on various social media outlets to get real-time data about the kinds of engagement that are needed and the kinds of messages that are successful or unsuccessful. [ 56 ] One body of existing research into the impact of new media on elections investigates the relationship between voters' use of new media and their level of political activity. They focus on areas such as "attentiveness, knowledge, attitudes, orientations, and engagement". [ 56 ] In referencing a vast body of research, Diana Owen points out that older studies were mixed, while "newer research reveals more consistent evidence of information gain".
[ 56 ] Some of that research has shown that there is a connection between the amount and degree of voter engagement and turnout. [ 56 ] However, new media may not have overwhelming effects on either of those. Other research tends toward the idea that new media has a reinforcing effect: rather than completely altering patterns of involvement, it "imitates the established pattern of political participation ". [ 57 ] After analyzing the Citizenship Involvement Democracy survey, Taewoo Nam found that "the internet plays a dual role in mobilizing political participation by people not normally politically involved, as well as reinforcing existing offline participation." These findings chart a middle ground between research that optimistically holds new media to be extremely effective at fostering political participation and research that holds it to be extremely ineffective. [ 57 ] Terri Towner found, in a survey of college students, that attention to new media increases offline and online political participation, particularly for young people. The research shows that the prevalence of online media boosts participation and engagement, and suggests that "it seems that online sources that facilitate political involvement, communication, and mobilization, particularly campaign websites, social media, and blogs, are the most important for offline political participation among young people". [ 58 ] When gauging the effects and implications of new media on the political process, one approach is to look at the deliberations that take place in these digital spaces. [ 59 ] In citing the work of several researchers, Halpern and Gibbs define deliberation to be "the performance of a set of communicative behaviors that promote thorough discussion, and the notion that in this process of communication the individuals involved weigh carefully the reasons for and against some of the propositions presented by others". [ 59 ] Daniel Halpern and Jennifer Gibbs "suggest that although social media may not provide a forum for intensive or in-depth policy debate, it nevertheless provides a deliberative space to discuss and encourage political participation, both directly and indirectly". Their work goes a step beyond that as well, because it shows that some social media sites foster more robust political debate than others, such as Facebook, which includes highly personal and identifiable access to information about users alongside any comments they may post on political topics. This is in contrast to sites like YouTube, whose comments are often posted anonymously. [ 59 ] Due to the popularity of new media, social media websites (SMWs) like Facebook and Twitter are becoming increasingly popular among researchers. [ 60 ] Although SMWs present new opportunities, they also represent challenges for researchers interested in studying social phenomena online, since it can be difficult to determine what are acceptable risks to privacy unique to social media. Some scholars argue that standard Institutional Review Board (IRB) procedures provide little guidance on research protocols relating to social media in particular. [ 61 ] As a consequence, three major approaches to research on social media, and relevant concerns scholars should consider before engaging in social media research, have been identified. One of the major issues for observational research is whether a particular project is considered to involve human subjects.
A human subject is one that “is defined by federal regulations as a living individual about whom an investigator obtains data through interaction with the individual or identifiable private information”. [ 47 ] If access to a social media site is public, information is considered identifiable but not private, and information gathering procedures do not require researchers to interact with the original poster of the information, then this does not meet the requirements for human subjects research. Research may also be exempt if the disclosure of participant responses outside the realm of the published research does not subject the participant to civil or criminal liability, or damage the participant's reputation, employability or financial standing. [ 47 ] Given these criteria, however, researchers still have considerable leeway when conducting observational research on social media. Many profiles on Facebook, Twitter, and LinkedIn are public, and researchers are free to use that data for observational research. Users have the ability to change their privacy settings on most social media websites. Facebook, for example, provides users with the ability to restrict who sees their posts through specific privacy settings. [ 61 ] There is also debate about whether requiring users to create a username and password is sufficient to establish whether the data is considered public or private. Historically, Institutional Review Boards considered such websites to be private, [ 47 ] although newer websites like YouTube call this practice into question. For example, YouTube only requires the creation of a username and password to post videos and/or view adult content, but anyone is free to view general YouTube videos, and these general videos would not be subject to consent requirements for researchers looking to conduct observational studies. According to Moreno and colleagues, interactive research occurs when "a researcher wishes to access the [social media website] content that is not publicly available". [ 60 ] Because researchers have limited ways of accessing this data, this could mean that a researcher sends a Facebook user a friend request, or follows a user on Twitter, in order to gain access to potentially protected tweets. [ 60 ] While it could be argued that such actions would violate a social media user's expectation of privacy, other scholars have argued that actions like "friending" or "following" an individual on social media constitute a "loose tie" relationship and are therefore not sufficient to establish a reasonable expectation of privacy, since individuals often have friends or followers they have never even met. [ 62 ] Because research on social media occurs online, it is difficult for researchers to observe participant reactions to the informed consent process. For example, when collecting information about activities that are potentially illegal, or recruiting participants from stigmatized populations, this lack of physical proximity could potentially negatively impact the informed consent process. [ 47 ] Another important consideration regards the confidentiality of information provided by participants. While information provided over the internet might be perceived as lower risk, studies that publish direct quotes from study participants might expose them to the risk of being identified via a Google search. [ 47 ]
https://en.wikipedia.org/wiki/New_media
New product development ( NPD ) or product development in business and engineering covers the complete process of bringing a new product to the market . Product development also includes the renewal of an existing product and introducing a product into a new market. A central aspect of NPD is product design . New product development is the realization of a market opportunity by making a product available for purchase. [ 1 ] The products developed by a commercial organisation provide the means to generate income . Many technology-intensive organisations exploit technological innovation in a rapidly changing consumer market. [ 2 ] A product can be a tangible asset or intangible. A service or user experience is intangible. In law, services and other processes are sometimes distinguished from "products". NPD requires an understanding of customer needs and wants, the competitive environment, and the nature of the market. Cost, time, and quality are the main variables that drive customer needs. Aiming at these three variables, innovative companies develop continuous practices and strategies to better satisfy customer requirements and to increase their own market share by a regular development of new products. There are many uncertainties and challenges which companies must face throughout the process. [ 2 ] New product development requires an understanding of customer needs and wants, the competitive environment, and the nature of the market. [ 3 ] Cost, time, and quality are the main variables that drive customer needs. Product pricing involves achieving a balance between setting a price which will attract customer interest and achieving an acceptable level of return on the costs and time involved in developing the product. [ 4 ] The product development process typically consists of several activities that firms employ in the complex process of delivering new products to the market. A process management approach is used to provide a structure. Product development often overlaps substantially with the engineering design process, particularly if the new product being developed involves the application of math and/or science. Every new product will pass through a series of stages/phases, including ideation among other aspects of design , as well as manufacturing and market introduction. In highly complex engineered products (e.g. aircraft, automotive, machinery), the NPD process can be likewise complex regarding management of personnel, milestones, and deliverables. Such projects typically use an integrated product team approach. The process for managing large-scale complex engineering products is much slower (often 10-plus years) than that deployed for many types of consumer goods. The development process is articulated and broken down in many different ways, many of which often include the following phases/stages: PHASE 1. Fuzzy front-end (FFE) is the set of activities employed before the more formal and well-defined requirements specification is completed. Requirements speak to what the product should do or have, at varying degrees of specificity, in order to meet the perceived market or business need. The fuzzy front end (FFE) is the messy "getting started" period of new product engineering development processes. It is also referred to as the "Front End of Innovation", [ 5 ] or "Idea Management". [ 6 ] It is in the front end where the organization formulates a concept of the product to be developed and decides whether or not to invest resources in the further development of an idea. 
[ 7 ] It is the phase between first consideration of an opportunity and when it is judged ready to enter the structured development process (Kim and Wilemon, 2007; [ 8 ] Koen et al., 2001). [ 5 ] It includes all activities from the search for new opportunities through the formation of a germ of an idea to the development of a precise concept. The Fuzzy Front End phase ends when an organization approves and begins formal development of the concept. Although the fuzzy front end may not be an expensive part of product development, it can consume 50% of development time (see Chapter 3 of the Smith and Reinertsen reference below), [ 9 ] and it is where major commitments are typically made involving time, money, and the product's nature, thus setting the course for the entire project and final end product. Consequently, this phase should be considered as an essential part of development rather than something that happens "before development", and its cycle time should be included in the total development cycle time. Koen et al. (2001) distinguish five different front-end elements (not necessarily in a particular order): [ 5 ] The first element is the opportunity identification. In this element, large or incremental business and technological chances are identified in a more or less structured way. Using the guidelines established here, resources will eventually be allocated to new projects, which then leads to a structured NPPD (New Product & Process Development) strategy. The second element is the opportunity analysis. It is done to translate the identified opportunities into implications for the business and technology specific context of the company. Here extensive efforts may be made to align ideas to target customer groups and do market studies and/or technical trials and research. The third element is the idea genesis, which is described as evolutionary and iterative process progressing from birth to maturation of the opportunity into a tangible idea. The process of the idea genesis can be made internally or come from outside inputs, e.g. a supplier offering a new material/technology or from a customer with an unusual request. The fourth element is the idea selection. Its purpose is to choose whether to pursue an idea by analyzing its potential business value. The fifth element is the idea and technology development. During this part of the front-end, the business case is developed based on estimates of the total available market, customer needs, investment requirements, competition analysis and project uncertainty. Some organizations consider this to be the first stage of the NPPD process (i.e., Stage 0). A universally acceptable definition for Fuzzy Front End or a dominant framework has not been developed so far. [ 10 ] In a glossary by the Product Development and Management Association , [ 11 ] it is mentioned that the fuzzy front end generally consists of three tasks: strategic planning, idea generation, and pre-technical evaluation. These activities are often chaotic, unpredictable, and unstructured. In comparison, the subsequent new product development process is typically structured, predictable, and formal. The term fuzzy front end was first popularized by Smith and Reinertsen (1991). [ 12 ] R.G. 
Cooper (1988) [ 13 ] describes the early stages of NPPD as a four-step process in which ideas are generated (I), subjected to a preliminary technical and market assessment (II) and merged into coherent product concepts (III), which are finally judged for their fit with existing product strategies and portfolios (IV). PHASE 2: Product design is the development of both the high-level and detailed-level design of the product: it turns the "what" of the requirements into a specific "how" for the way this particular product will meet those requirements. This typically has the most overlap with the engineering design process, but can also include industrial design and even purely aesthetic aspects of design. On the marketing and planning side, this phase ends at the pre-commercialization analysis [ clarification needed ] stage. PHASE 3: Product implementation often refers to later stages of detailed engineering design (e.g. refining mechanical or electrical hardware, or software, or goods or other product forms), as well as the test processes that may be used to validate that the prototype actually meets all design specifications that were established. PHASE 4: The fuzzy back-end or commercialization phase represents the action steps where the production and market launch occur. The front-end marketing phases have been very well researched, with valuable models proposed. Peter Koen et al. provide a five-step front-end activity called front-end innovation: opportunity identification, opportunity analysis, idea genesis, idea selection, and idea and technology development. They also include an engine in the middle of the five front-end stages and the possible outside barriers that can influence the process outcome. The engine represents the management driving the activities described. The front end of innovation is the greatest area of weakness in the NPD process, mainly because the FFE is often chaotic, unpredictable and unstructured. [ 14 ] Engineering design is the process whereby a technical solution is developed iteratively to solve a given problem. [ 15 ] The design stage is very important because most of the product life cycle costs are committed at this stage. Previous research shows that 70–80% of the final product quality and 70% of the product's entire life-cycle cost are determined in the product design phase; therefore, the design–manufacturing interface represents the greatest opportunity for cost reduction. [ 16 ] Design projects last from a few weeks to three years, with an average of one year. [ 17 ] The design and commercialization phases usually begin collaborating very early. When the concept design is finished, it is sent to the manufacturing plant for prototyping, following a Concurrent Engineering approach that implements practices such as QFD , DFM / DFA and more. The output of design (engineering) is a set of product and process specifications – mostly in the form of drawings – and the output of manufacturing is the product ready for sale. [ 18 ] Basically, the design team develops drawings with technical specifications representing the future product and sends them to the manufacturing plant to be executed. Solving product/process fit problems is of high priority in information communication design because 90% of the development effort must be scrapped if any changes are made after the release to manufacturing. [ 18 ] Conceptual models have been designed in order to facilitate a smooth product development process. 
Booz, Allen and Hamilton Model : One of the first developed models that companies still use in the NPD process is the Booz, Allen and Hamilton (BAH) Model, published in 1982. [ 19 ] This is the best known model because it underlies the NPD systems that have been put forward later. [ 20 ] This model represents the foundation of all the other models that have been developed afterwards. Significant work has been conducted in order to propose better models, but in fact these models can be easily linked to BAH model. The seven steps of the BAH model are: new product strategy , idea generation, screening and evaluation, business analysis, development, testing, and commercialization. Exploratory product development model (ExPD) : Exploratory product development, which often goes by the acronym ExPD, is an emerging approach to new product development. Consultants Mary Drotar and Kathy Morrissey first introduced ExPD at the 2015 Product Development and Management Association annual meeting [ 21 ] and later outlined their approach in the Product Development and Management Association's magazine Visions . [ 21 ] In 2015, Drotar and Morrissey's firm Strategy2Market received the trademark on the term "Exploratory PD". [ 22 ] Rather than going through a set of discrete phases, like the phase-gate process , this exploratory product development process allows organizations to adapt to a landscape of shifting market circumstances and uncertainty by using a more flexible and adaptable product development process for both hardware and software. Where the traditional phase-gate approach works best in a stable market environment, ExPD is more suitable for product development in markets that are unstable and less predictable. Unstable and unpredictable markets cause uncertainty and risk in product development. Many factors contribute to the outcome of a project, and ExPD works on the assumption that the ones that the product team doesn't know enough about or are unaware of are the factors that create uncertainty and risk. The primary goal of ExPD is to reduce uncertainty and risk by reducing the unknown. When organizations adapt quickly to the changing environment (market, technology, regulations, globalization, etc.), they reduce uncertainty and risk, which leads to product success. ExPD is described as a two-pronged, integrated systems approach. Drotar and Morrissey state that product development is complex and needs to be managed as a system, integrating essential elements: strategy, portfolio management, organization/teams/culture, metrics, market/customer understanding, and process. [ 21 ] Drotar and Morrissey have published two books on ExPD. The first, Exploratory Product Development: Executive Version: Adaptable Product Development in a Changing World, was published as an e-book on December 3, 2018. [ 23 ] On September 8, 2022, Drotar and Morrissey published their second book, "Learn & Adapt: ExPD An Adaptive Product Development Process for Rapid Innovation and Risk Reduction, which also highlights their process. [ 24 ] The book has three sections: Overview of ExPD, How to Do It, and Adaptive Practices that Support ExPD. [ 25 ] According to Kirkus, "the (approach the) authors advocate is outwardly focused and premised on being adaptable enough to develop new competencies and create new models as complex situations evolve." Kirkus summarizes the text as "complex and visually stimulating; a serious blueprint for serious strategists." [ 24 ] IDEO approach. 
The concept adopted by IDEO, a design and consulting firm, is one of the most researched processes in regard to new product development and is a five-step procedure. [ 26 ] These steps are listed in chronological order: Lean start-up approach : Lean startup is a methodology for developing businesses and products that aims to shorten product development cycles and rapidly discover if a proposed business model is viable; this is achieved by adopting a combination of business-hypothesis-driven experimentation, iterative product releases, and validated learning. Lean startup emphasizes customer feedback over intuition and flexibility over planning. This methodology enables recovery from failures more often than traditional ways of product development. Stage-Gate model : [ a ] a pioneer of NPD research in the consumer goods sector is Robert G. Cooper. Over the last two decades [ when? ] he conducted significant work in the area of NPD. The Stage-Gate model, developed in the 1980s, was proposed as a new tool for managing new product development processes. This was mainly applied to the consumer goods industry. [ 28 ] The 2010 APQC benchmarking study reveals that 88% of U.S. businesses employ a Stage-Gate system to manage new products, from idea to launch. In return, the companies that adopt this system are reported to receive benefits such as improved teamwork, improved success rates, earlier detection of failure, a better launch, and even shorter cycle times – reduced by about 30%. [ 29 ] These findings highlight the importance of the Stage-Gate model in the area of new product development. In the Stage-Gate model of NPD, predevelopment activities are summarised in phases zero and one, [ 30 ] with respect to an earlier definition of predevelopment activities. [ 31 ] These activities yield essential information to make a Go/No-Go to Development decision. These decisions represent the Gates in the Stage-Gate model. The following are types of new product development management structures: Customer-centric new product development focuses on finding new ways to solve customer problems and create more customer-satisfying experiences. Companies often rely on technology, but real success comes from understanding customer needs and values. The most successful companies are the ones that differentiate themselves from others, solve major customer problems, offer a compelling customer value proposition, and engage customers directly [ 32 ] and systematically. [ 33 ] Systematic new product development focuses on creating a process that allows for the collection, review, and evaluation of new product ideas. [ 34 ] Having a way in which employees, suppliers , distributors , and dealers become involved in finding and developing new products is important to a company's success. [ 35 ] Co-ordinated (and often early) involvement of suppliers in new product development has been seen as productive and generally advocated since the 1980s. [ 36 ] Early supplier involvement (ESI) is generally seen as cost-reducing, although account also needs to be taken of the danger of being locked into a supplier who can press for higher prices. [ 37 ] The Chartered Institute of Procurement & Supply sets out four ESI phases. Cost transparency, backed by appropriate contract terms and supplier audit requirements, can help a business to avoid being disadvantaged by undue supplier pressure. [ 39 ] It is also important for companies to have a process in place for monitoring competitors and their products so that they can stay ahead of them. 
[ citation needed ] In order to successfully manage the new product development process, companies must have an innovation management system in place. This system helps to ensure that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. The innovation management system should also help to foster a culture of innovation within the company, which can help to increase the chances of success for new products. [ citation needed ] Marketing writers Hyman and Wilkins argue that a company's rate of product innovation should fit between the extremes of being so rapid that "its core range decays" and so slow that its product range "become[s] obsolete". [ 32 ] An innovation manager is a senior person appointed to be responsible for implementing and managing the innovation management system. [ citation needed ] They are also responsible for ensuring that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. [ citation needed ] A cross-functional innovation management committee is a team of individuals from different company departments, including marketing , engineering, design, manufacturing, and research and development , who are responsible for overseeing and managing the new product development process. This committee helps to ensure that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. Companies may get a better overall picture of new product development by putting together a cross-functional team, which can help generate fresh ideas and give assistance in evaluating them. In addition, companies can use virtual product development to help reduce costs. This process uses collaboration technology to remove the need for co-located teams, which can result in significant cost savings such as a reduction in G&A (general & administrative) overhead costs of consulting firms. [ citation needed ] Another way to reduce the cost of new product development is through the use of 24-hour development cycles. This approach allows companies to develop products more quickly and at a lower cost. By using a 24-hour cycle, companies can shorten the time it takes to get a product to market, which can give them a competitive advantage and capability that can be extremely useful in cases where there is a sudden change in market conditions or customer needs. [ citation needed ] In difficult economic times, it is even more important for companies to focus on innovation and new product development. [ 40 ] [ 41 ] Oftentimes, such situations result in a short-sighted focus on cost-cutting and a reduction in spending on new products. However, companies that are able to innovate and create new products will be better positioned for the future. Although counter-intuitive, [ why? ] tough times may even call for a greater emphasis on new product development. This is because companies need to find ways to meet the changing needs and tastes of their customers. Innovation can help a company become more competitive and better positioned for the future. By using a variety of methods, such as virtual product development and 24-hour development cycles, companies can reduce the cost of new product development and improve their chances of success. There are many different roles in a product development team: below is a list of some of the more common ones: [ 42 ] [ 43 ]
https://en.wikipedia.org/wiki/New_product_development
In biological oceanography , new production is supported by nutrient inputs from outside the euphotic zone , especially upwelling of nutrients from deep water, but also from terrestrial and atmospheric sources (as opposed to regenerated production, which is supported by recycling of nutrients within the euphotic zone). New production depends on mixing and vertical advective processes associated with the circulation . [ 1 ] Bio-available nitrogen occurs in the ocean in several forms, including simple ionic forms such as nitrate (NO3−), nitrite (NO2−) and ammonium (NH4+), and more complex organic forms such as urea ((NH2)2CO). These forms are utilised by autotrophic phytoplankton to synthesise organic molecules such as amino acids (the building blocks of proteins ). Grazing of phytoplankton by zooplankton and larger organisms transfers this organic nitrogen up the food chain and throughout the marine food-web . When nitrogenous organic molecules are ultimately metabolised by organisms, they are returned to the water column as ammonium (or more complex molecules that are then metabolised to ammonium). This is known as regeneration, since the ammonium can be used by phytoplankton and again enter the food-web. Primary production fuelled by ammonium in this way is thus referred to as regenerated production. However, ammonium can also be oxidised to nitrate (via nitrite) by the process of nitrification. This is performed by different bacteria in two stages:

NH3 + O2 → NO2− + 3H+ + 2e−
NO2− + H2O → NO3− + 2H+ + 2e−

Crucially, this process is believed to occur only in the absence of light (or as some other function of depth). In the ocean, this leads to a vertical separation of nitrification from primary production and confines nitrification to the aphotic zone . As a result, any nitrate in the water column must be from the aphotic zone and must have originated from organic material transported there by sinking. Primary production fuelled by nitrate therefore makes use of a "fresh" nutrient source rather than a regenerated one, and production fuelled by nitrate is thus referred to as new production. [ 2 ] To sum up, production based on nitrate uses nutrient molecules newly arrived from outside the productive layer, and is therefore termed new production; the rate of nitrate utilization remains a good measure of new production. If the organic matter is then eaten and respired, and the nitrogen excreted as ammonia, its subsequent uptake and re-incorporation into organic matter by phytoplankton is termed recycled (or regenerated) production; the rate of ammonia utilization is, in the same sense, a measure of recycled production. [ 3 ] The use of 15N-compounds makes it possible to measure the fractions of new nitrogen and regenerated nitrogen associated with primary production in the sea. [ 2 ]
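As a rough illustration of how the two uptake rates partition nitrogen-fuelled primary production, the short Python sketch below computes the new and regenerated fractions from hypothetical nitrate and ammonium uptake rates; the rate values are invented for illustration only and do not come from the sources cited above.

```python
# Illustrative partitioning of nitrogen-fuelled primary production into
# new (nitrate-based) and regenerated (ammonium-based) fractions.
# The uptake rates below are made-up example numbers, not measured data.

nitrate_uptake = 0.8    # nitrate uptake rate (e.g. mmol N m^-2 d^-1) -> new production
ammonium_uptake = 2.4   # ammonium uptake rate -> regenerated production

total_uptake = nitrate_uptake + ammonium_uptake
print(f"new production fraction:         {nitrate_uptake / total_uptake:.2f}")
print(f"regenerated production fraction: {ammonium_uptake / total_uptake:.2f}")
```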
https://en.wikipedia.org/wiki/New_production
The Newbery-Vautin chlorination process is a method for extracting gold from its ore through the use of chlorination . This process was jointly developed by James Cosmo Newbery and Claude Theodore James Vautin . A process for extracting gold from gold ores containing pyrite using chlorine gas was introduced by Karl Friedrich Plattner (1800–1858), around 1848. [ 1 ] James Newbery and Claude Vautin began work on chlorination, at the United Pyrites Gold Extracting Co. in Bendigo , in 1878. They were awarded Victorian Patent No. 4484, in 1886, for their Newbery-Vautin chlorination process, which was faster than earlier chlorination processes. [ 2 ] [ 3 ] Roasted gold-bearing pyrite concentrates and water were combined in a barrel, to which calcium chloride and sulphuric acid were added. The barrel was then made airtight. Compressed air was used to raise the internal pressure to four atmospheres , and the barrel was set in rotation. The sulphuric acid and calcium chloride reacted to form chlorine gas. The gold contained in the concentrate reacted with the chlorine to form gold chloride , which is a salt soluble in water. Once the extraction had occurred, after around four hours, the liquid containing the gold chloride was separated from the solid residue by vacuum filtration and passed through a charcoal filter, where the gold precipitated. The charcoal containing the gold precipitate was burnt and the ashes fused with borax to produce gold bullion . [ 2 ] [ 4 ] If copper was present in significant quantity, the remnant liquid after charcoal filtering, no longer containing gold but still containing soluble salts of copper, was then run over scrap iron, where the copper precipitated. The Newbery-Vautin process was not useful for extracting any silver, if present, because silver chloride is insoluble in water and so it remained in the solid residue. One of the early adopters of the chlorination process was the large Mount Morgan mine in Queensland . The original chlorination plant installed there used the Newbery-Vautin process. It seems that the process was later modified by the mining company, leading to some dispute about whose process actually was in use. [ 5 ] [ 6 ] Mount Morgan continued to use a chlorination process for many years. [ 7 ] Other Australian mines that used the Newbery-Vautin process included the Cunningar Proprietary Gold Mining and Chlorination Company, at McMahon's Reef, and the Majors Creek Proprietary Gold-Mining Company's mine at Darque's Reef, near Majors Creek , both in New South Wales, and, in Queensland, the Ravenswood Gold Mining Company. The process was used at mines outside Australia, including at Thames in New Zealand, Denver in Colorado, Vancouver in Canada, at Johannesburg and Barberton in South Africa, and at the Morro Velho mine in Brazil. [ 8 ] The Newbery-Vautin process and other processes based on chlorination were replaced by processes based on cyanidation , which used fewer reagents. Processes that are free of cyanide and emit less toxic byproducts have also been developed. [ 9 ]
https://en.wikipedia.org/wiki/Newbery–Vautin_chlorination_process
In philosophy and mathematics , Newcomb's paradox , also known as Newcomb's problem , is a thought experiment involving a game between two players, one of whom is able to predict the future. Newcomb's paradox was created by William Newcomb of the University of California 's Lawrence Livermore Laboratory . However, it was first analyzed in a philosophy paper by Robert Nozick in 1969 [ 1 ] and appeared in the March 1973 issue of Scientific American , in Martin Gardner 's " Mathematical Games ". [ 2 ] Today it is a much debated problem in the philosophical branch of decision theory . [ 3 ] There is a reliable predictor, a player, and two boxes designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following: [ 4 ] The player does not know what the predictor predicted or what box B contains while making the choice. In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." [ 4 ] The problem continues to divide philosophers today. [ 5 ] [ 6 ] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%). [ 7 ] Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is considered a paradox because two seemingly logical analyses yield conflicting answers regarding which choice maximizes the player's payout. David Wolpert and Gregory Benford point out that paradoxes arise when not all relevant details of a problem are specified, and there is more than one "intuitively obvious" way to fill in those missing details. They suggest that, in Newcomb's paradox, the debate over which strategy is 'obviously correct' stems from the fact that interpreting the problem details differently can lead to two distinct noncooperative games. Each strategy is optimal for one interpretation of the game but not the other. They then derive the optimal strategies for both of the games, which turn out to be independent of the predictor's infallibility, questions of causality , determinism, and free will. [ 4 ] Causality issues arise when the predictor is posited as infallible and incapable of error; Nozick avoids this issue by positing that the predictor's predictions are " almost certainly" correct, thus sidestepping any issues of infallibility and causality. Nozick also stipulates that if the predictor predicts that the player will choose randomly, then box B will contain nothing. This assumes that inherently random or unpredictable events would not come into play anyway during the process of making the choice, such as free will or quantum mind processes. [ 8 ] However, these issues can still be explored in the case of an infallible predictor. Under this condition, it seems that taking only B is the correct option. This analysis argues that we can ignore the possibilities that return $0 and $1,001,000, as they both require that the predictor has made an incorrect prediction, and the problem states that the predictor is never wrong. Thus, the choice becomes whether to take both boxes with $1,000 or to take only box B with $1,000,000 – so taking only box B is always better. 
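To make the contrast between the two analyses concrete, here is a minimal Python sketch of the expected-utility calculation, assuming (as a simplification) that the predictor is correct with the same probability p regardless of the player's choice; the payoff amounts follow the standard statement of the problem, and the probability values looped over are purely illustrative.

```python
# Expected payoff of one-boxing vs two-boxing in Newcomb's problem,
# assuming the predictor is right with probability p for either choice.

def expected_values(p):
    one_box = p * 1_000_000 + (1 - p) * 0        # take box B only
    two_box = p * 1_000 + (1 - p) * 1_001_000    # take both boxes
    return one_box, two_box

for p in (0.5, 0.9, 0.99, 1.0):
    one_box, two_box = expected_values(p)
    print(f"p = {p}: one-box EV = ${one_box:,.0f}, two-box EV = ${two_box:,.0f}")

# The dominance argument ignores p entirely: whatever box B already contains,
# taking both boxes yields exactly $1,000 more, which is why the two
# principles recommend opposite choices for any sufficiently reliable predictor.
```

Under these assumptions, expected utility favours one-boxing whenever p exceeds roughly 0.5005, while the dominance argument favours two-boxing at every value of p, which is exactly the tension the paradox turns on.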
William Lane Craig has suggested that, in a world with perfect predictors (or time machines , because a time machine could be used as a mechanism for making a prediction), retrocausality can occur. [ 9 ] The chooser's choice can be said to have caused the predictor's prediction. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and choosers will do whatever they are fated to do. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Put another way, this paradox can be equivalent to the grandfather paradox ; the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions. [ 10 ] Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous – a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street. [ 11 ] Andrew Irvine argues that the problem is structurally isomorphic to Braess's paradox , a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds. [ 12 ] Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor's prediction, for example, by committing to taking only one box. So players who are still in the first stage should simply commit themselves to one-boxing. Burgess readily acknowledges that those who are in the second stage should take both boxes. As he emphasises, however, for all practical purposes that is beside the point; the decisions "that determine what happens to the vast bulk of the money on offer all occur in the first [stage]". [ 13 ] So players who find themselves in the second stage without having already committed to one-boxing will invariably end up without the riches and without anyone else to blame. In Burgess's words: "you've been a bad boy scout"; "the riches are reserved for those who are prepared". [ 14 ] Burgess has stressed that – pace certain critics (e.g., Peter Slezak) – he does not recommend that players try to trick the predictor. Nor does he assume that the predictor is unable to predict the player's thought process in the second stage. [ 15 ] Quite to the contrary, Burgess analyses Newcomb's paradox as a common cause problem, and he pays special attention to the importance of adopting a set of unconditional probability values – whether implicitly or explicitly – that are entirely consistent at all times. To treat the paradox as a common cause problem is simply to assume that the player's decision and the predictor's prediction have a common cause. (That common cause may be, for example, the player's brain state at some particular time before the second stage begins.) It is also notable that Burgess highlights a similarity between Newcomb's paradox and the Kavka's toxin puzzle . In both problems one can have a reason to intend to do something without having a reason to actually do it. 
Recognition of that similarity, however, is something that Burgess actually credits to Andy Egan. [ 16 ] Newcomb's paradox can also be related to the question of machine consciousness , specifically if a perfect simulation of a person's brain will generate the consciousness of that person. [ 17 ] Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the chooser, then the chooser cannot tell whether they are standing in front of the boxes in the real world or in the virtual world generated by the simulation in the past. The "virtual" chooser would thus tell the predictor which choice the "real" chooser is going to make, and the chooser, not knowing whether they are the real chooser or the simulation, should take only the second box. Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome. [ 18 ] Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature. [ 1 ] For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed. [ 19 ] Another related problem is the meta-Newcomb problem. [ 20 ] The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a "meta-predictor" who has reliably predicted both the players and the predictor in the past, and who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision." In this situation, a proponent of choosing both boxes is faced with the following dilemma: if the player chooses both boxes, the predictor will not yet have made its decision, and therefore a more rational choice would be for the player to choose box B only. But if the player so chooses, the predictor will already have made its decision, making it impossible for the player's decision to affect the predictor's decision.
https://en.wikipedia.org/wiki/Newcomb's_paradox
The Newcomb Cleveland Prize of the American Association for the Advancement of Science (AAAS) is awarded annually to the author(s) of an outstanding scientific paper published in the Research Articles or Reports sections of Science . The prize was established in 1923 and funded by Newcomb Cleveland, who remained anonymous until his death in 1951; during this period it was known as the AAAS Thousand Dollar Prize . "The prize was inspired by Mr. Cleveland's belief that it was the scientist who counted and who needed the encouragement an unexpected monetary award could give." [ 1 ] The present rules were instituted in 1975; previously the prize had gone to the author(s) of noteworthy papers, representing an outstanding contribution to science, presented in a regular session, sectional or societal, during the AAAS Annual Meeting. It is now sponsored by the Fodor Family Trust [ 2 ] and includes a prize of $ 25,000. [ 3 ] Winning papers [ 2 ] [ 4 ] have included: The DNA Binding Domain of the Rat Liver Nuclear Protein C/EBP Is Bipartite; Molecular genetics of inherited variation in human color vision; Genetic Transformation of Drosophila with Transposable Element Vectors; and A Neural Map of Auditory Space in the Owl.
https://en.wikipedia.org/wiki/Newcomb_Cleveland_Prize
In mathematics and phylogenetics , Newick tree format (or Newick notation or New Hampshire tree format ) is a way of representing graph-theoretical trees with edge lengths using parentheses and commas. It was adopted by James Archie, William H. E. Day, Joseph Felsenstein , Wayne Maddison , Christopher Meacham, F. James Rohlf, and David Swofford, at two meetings in 1986, the second of which was at Newick's restaurant [ 1 ] in Dover , New Hampshire, US. The adopted format is a generalization of the format developed by Meacham in 1984 for the first tree-drawing programs in Felsenstein's PHYLIP package. [ 2 ] A given tree can be represented in Newick format in several ways. Newick format is typically used for tools like PHYLIP and is a minimal definition for a phylogenetic tree . When an unrooted tree is represented in Newick notation, an arbitrary node is chosen as its root. Whether rooted or unrooted, typically a tree's representation is rooted on an internal node and it is rare (but legal) to root a tree on a leaf node. A rooted binary tree that is rooted on an internal node has exactly two immediate descendant nodes for each internal node. An unrooted binary tree that is rooted on an arbitrary internal node has exactly three immediate descendant nodes for the root node, and each other internal node has exactly two immediate descendant nodes. A binary tree rooted from a leaf has at most one immediate descendant node for the root node, and each internal node has exactly two immediate descendant nodes. A grammar for parsing the Newick format (roughly based on [ 3 ] ) is:

Tree → Subtree ";"
Subtree → Leaf | Internal
Leaf → Name
Internal → "(" BranchSet ")" Name
BranchSet → Branch | Branch "," BranchSet
Branch → Subtree Length
Name → empty | string
Length → empty | ":" number

Note: "|" separates alternatives. Whitespace (spaces, tabs, carriage returns, and linefeeds) within number is prohibited. Whitespace within string is often prohibited. Whitespace elsewhere is ignored. Sometimes the Name string must be of a specified fixed length; otherwise the punctuation characters from the grammar (semicolon, parentheses, comma, and colon) are prohibited. The Tree → Subtree ";" production is instead the Tree → Branch ";" production in those cases where having the entire tree descended from nowhere is permitted; this captures the replaced production as well because Length can be empty . Note that when a tree having more than one leaf is rooted from one of its leaves, a representation that is rarely seen in practice, the root leaf is characterized as an Internal node by the above grammar. Generally, a root node labeled as Internal should be construed as actually internal if and only if it has at least two Branch es in its BranchSet . One can make a grammar that formalizes this distinction by replacing the above Tree production rule with productions that introduce a RootLeaf case. The first RootLeaf production is for a tree with exactly one leaf. The second RootLeaf production is for rooting a tree from one of its two or more leaves. The New Hampshire X (NHX) format is an extension to Newick that adds key-value data (gene duplication, etc.) to Newick nodes. This is done by putting the additional data in square brackets [&&NHX: key = value :...] in the node labels. The brackets are used because they represent comments in the Nexus file format, so any parser that does not understand this additional information will ignore it. [ 4 ] While the standard Newick notation is limited to phylogenetic trees, Extended Newick (Perl Bio::PhyloNetwork) can be used to encode explicit phylogenetic networks. 
[ 5 ] In a phylogenetic network , which is a generalization of a phylogenetic tree , a node represents either a divergence event ( cladogenesis ) or a reticulation event such as hybridization , introgression , horizontal (lateral) gene transfer or recombination . Nodes that represent a reticulation event are duplicated, annotated by introducing the # symbol into the Newick format, and numbered consecutively (using integer values starting with 1). For example, if leaf Y is the product of hybridisation (x) between the lineages leading to C and D, one can express this situation either by defining two trees (each with leaves A, B, C, Y and D) in standard Newick notation, or by a single description in extended Newick notation. The x#H1 here is a hybrid node; a drawing program such as Dendroscope joins the two copies into a single node when the network is drawn. The production rules above are modified for labelling hybrid nodes (in general, nodes representing reticulation events). [ 6 ] In the visualization of LGT events, for a given reticulate node, one incoming edge is usually drawn as an "acceptor" edge and all other incoming edges are drawn as "transfer" edges. Some programs (e.g. Dendroscope and SplitsTree ) allow exactly one copy of the reticulate node to be labeled with ## to indicate that it corresponds to the acceptor edge. Extended Newick is backward-compatible: legacy parsers simply interpret a hybrid node as a few strangely-named nodes. The Rich Newick format, also known as the Rice Newick format, is a further extension of Extended Newick. [ 7 ] Some other programs, like NWX, use comments starting with & to encode additional information in an ad hoc manner. [ 8 ] Many tools have been published to visualize Newick tree data. Specific examples include the ETE toolkit ("Environment for Tree Exploration") [ 9 ] and T-REX . [ 10 ] Phylogenetic software packages such as SplitsTree and the tree-viewer Dendroscope, as well as the online tree viewing tool IcyTree, can handle standard and extended Newick notation, while the phylogenetic network software PhyloNet makes use of both the Extended Newick and Rich Newick formats.
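As an illustration of how standard Newick strings can be consumed programmatically, the following Python sketch implements a minimal recursive-descent parser along the lines of the grammar above; it assumes unquoted labels, optional ":<number>" branch lengths and no whitespace or comments, so it is a simplified reading of the format rather than a complete implementation of Newick or its extensions.

```python
# Minimal recursive-descent parser for simplified Newick strings.
# Handles nested "(...)" groups, unquoted names and optional ":length" suffixes.

def parse_newick(text):
    pos = 0

    def parse_name():
        nonlocal pos
        start = pos
        while pos < len(text) and text[pos] not in "(),:;":
            pos += 1
        return text[start:pos]                      # may be the empty string

    def parse_branch():
        nonlocal pos
        node = parse_subtree()
        if pos < len(text) and text[pos] == ":":    # optional branch length
            pos += 1
            start = pos
            while pos < len(text) and (text[pos].isdigit() or text[pos] in ".+-eE"):
                pos += 1
            node["length"] = float(text[start:pos])
        return node

    def parse_subtree():
        nonlocal pos
        if text[pos] == "(":                        # internal node
            pos += 1
            children = [parse_branch()]
            while text[pos] == ",":
                pos += 1
                children.append(parse_branch())
            assert text[pos] == ")", "expected ')'"
            pos += 1
            return {"name": parse_name(), "children": children}
        return {"name": parse_name(), "children": []}   # leaf node

    tree = parse_subtree()
    assert pos < len(text) and text[pos] == ";", "expected terminating ';'"
    return tree

print(parse_newick("(A:0.1,B:0.2,(C:0.3,D:0.4)E:0.5)F;"))
```

Running the last line prints a nested dictionary rooted at the node named F, with the internal node E and the four leaves A to D as descendants.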
https://en.wikipedia.org/wiki/Newick_format
In theoretical computer science , specifically in term rewriting , Newman's lemma , also commonly called the diamond lemma , is a criterion to prove that an abstract rewriting system is confluent . It states that local confluence is a sufficient condition for confluence, provided that the system is also terminating . This is useful since local confluence is usually easier to verify than confluence. The lemma was originally proved by Max Newman in 1942. [ 1 ] [ 2 ] A considerably simpler proof (given below) was proposed by Gérard Huet . [ 3 ] A number of other proofs exist. [ 4 ] The lemma is purely combinatorial and applies to any relation. Owing to the context where it is commonly applied, it is stated below in the terminology of abstract rewriting systems: such a system is simply a set whose elements are called terms, equipped with a relation $\to$ called reduction; see the corresponding article for definitions of termination, confluence, local confluence and normal forms. Newman's lemma: [ 5 ] [ 6 ] [ 7 ] [ 8 ] If an abstract rewriting system is terminating and locally confluent, then it is confluent. As a corollary, every term has a unique normal form. Proof: Write $\to^{*}$ for the reflexive–transitive closure of $\to$ . We prove by well-founded induction on $u$ along $\to$ that every peak $v \leftarrow^{*} u \to^{*} w$ can be completed to $v \to^{*} t \leftarrow^{*} w$ for some term $t$ . If $u=v$ or $u=w$ , this is trivial. Otherwise, we have at least one reduction on each side: $v \leftarrow^{*} v_{0} \leftarrow u \to w_{0} \to^{*} w$ . By local confluence applied to $v_{0} \leftarrow u \to w_{0}$ , there is a term $t_{0}$ with $v_{0} \to^{*} t_{0} \leftarrow^{*} w_{0}$ ; then, by the induction hypothesis on $v_{0}$ , there is $t_{1}$ with $v \to^{*} t_{1} \leftarrow^{*} t_{0}$ ; and finally, by the induction hypothesis on $w_{0}$ , there is $t$ with $t_{1} \to^{*} t \leftarrow^{*} w$ , which completes the peak.
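For a small finite system the lemma can be checked directly: the Python sketch below takes a terminating reduction relation (the particular relation is a made-up example), tests local confluence by checking joinability of every one-step peak, and then confirms that each term has a unique normal form, as the lemma guarantees.

```python
# Checking Newman's lemma on a tiny, terminating abstract rewriting system.
# 'step' maps each term to its set of one-step reducts (an invented example).

step = {
    "a": {"b", "c"},
    "b": {"d"},
    "c": {"d"},
    "d": set(),
}

def reducts(t):
    """All terms reachable from t by zero or more reduction steps."""
    seen, frontier = {t}, [t]
    while frontier:
        u = frontier.pop()
        for v in step.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def locally_confluent():
    # every peak u <- t -> v of one-step reducts must be joinable
    return all(reducts(u) & reducts(v)
               for t, succs in step.items()
               for u in succs for v in succs)

def normal_forms(t):
    return {u for u in reducts(t) if not step.get(u)}

print("locally confluent:", locally_confluent())
print({t: normal_forms(t) for t in step})   # every set is the singleton {'d'}
```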
https://en.wikipedia.org/wiki/Newman's_lemma
A Newman projection is a drawing that helps visualize the 3-dimensional structure of a molecule. [ 1 ] This projection most commonly sights down a carbon-carbon bond, making it a very useful way to visualize the stereochemistry of alkanes. A Newman projection visualizes the conformation of a chemical bond from front to back, with the front atom represented by the intersection of three lines (a dot) and the back atom as a circle. The front atom is called proximal , while the back atom is called distal . This type of representation clearly illustrates the specific dihedral angle between the proximal and distal atoms. [ 2 ] This projection is named after American chemist Melvin Spencer Newman , who introduced it in 1952 as a partial replacement for Fischer projections , which are unable to represent conformations and thus conformers properly. [ 3 ] [ 4 ] This diagram style is an alternative to a sawhorse projection , which views a carbon–carbon bond from an oblique angle, or a wedge-and-dash style, such as a Natta projection . These other styles can indicate the bonding and stereochemistry , but not as much conformational detail. A Newman projection can also be used to study cyclic molecules , [ 3 ] such as the chair conformation of cyclohexane : Because of the free rotation around single bonds, there are various conformations for a single molecule. [ 1 ] Up to six unique conformations may be drawn for any given chemical bond. Each conformation is drawn by rotation of either the proximal or distal atom 60 degrees. Of these six conformations, three will be in a staggered conformation, while the other three will be in an eclipsed conformation. These six conformations can be represented in a relative energy diagram. A staggered projection appears to have the surrounding species equidistant from each other. This kind of conformation tends to experience both anti and gauche interactions. [ 5 ] Anti interactions refer to the molecules (usually of the same type) sitting exactly opposite of each other at 180° on the Newman projection. [ 5 ] Gauche interactions refer to molecules (also usually of the same type) being 60° from each other on a Newman projection. Anti interactions experience less steric strain than gauche interactions, but both experience less steric strain than the eclipsed conformation. [ 5 ] An eclipsed projection appears to have the surrounding species almost on top of each other. In reality, these species are in line with each other, but are drawn slightly staggered to help format the projection onto paper. These types of conformations are generally higher in energy due to increased bond strain. [ 1 ] However, this strain can be somewhat lower if a hydrogen is eclipsed over a larger species, as opposed to two large species eclipsed over each other. [ 1 ]
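Since a Newman projection is essentially a way of reading the dihedral angle about the sighted bond, the short Python sketch below (using NumPy) computes that angle from four atomic positions; the coordinates used are a contrived anti-periplanar arrangement chosen only to check that the function returns roughly 180°.

```python
import numpy as np

def dihedral(p1, p2, p3, p4):
    """Dihedral angle in degrees about the p2-p3 bond, i.e. the angle read off
    a Newman projection sighted down that bond (p1 front substituent, p4 back)."""
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)                    # normal of the p1-p2-p3 plane
    n2 = np.cross(b2, b3)                    # normal of the p2-p3-p4 plane
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    x, y = np.dot(n1, n2), np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))

# Contrived anti arrangement: substituents exactly opposite across the bond.
p1 = np.array([ 1.0, 0.0, 0.0])
p2 = np.array([ 0.0, 0.0, 0.0])
p3 = np.array([ 0.0, 0.0, 1.0])
p4 = np.array([-1.0, 0.0, 1.0])
print(round(dihedral(p1, p2, p3, p4), 1))    # about 180.0 (anti); 60 would be gauche
```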
https://en.wikipedia.org/wiki/Newman_projection
In general relativity , the Newman–Janis algorithm (NJA) is a complexification technique for finding exact solutions to the Einstein field equations . In 1964, Newman and Janis showed that the Kerr metric could be obtained from the Schwarzschild metric by means of a coordinate transformation that allows the radial coordinate to take on complex values. Originally, no clear reason why the algorithm works was known. [ 1 ] In 1998, Drake and Szekeres gave a detailed explanation of the success of the algorithm and proved the uniqueness of certain solutions. In particular, the only perfect fluid solution generated by the NJA is the Kerr metric, and the only Petrov type D solution is the Kerr–Newman metric . [ 2 ] The algorithm works well on f ( R ) and Einstein–Maxwell–dilaton theories, but does not return the expected results on braneworld and Born–Infeld theories. [ 3 ] This relativity -related article is a stub . You can help Wikipedia by expanding it . This mathematics -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Newman–Janis_algorithm
The Newman–Kwart rearrangement is a type of rearrangement reaction in which the aryl group of an O -aryl thiocarbamate , ArOC(=S)NMe 2 , migrates from the oxygen atom to the sulfur atom, forming an S -aryl thiocarbamate, ArSC(=O)NMe 2 . [ 1 ] [ 2 ] [ 3 ] The reaction is named after its discoverers, Melvin Spencer Newman [ 4 ] and Harold Kwart. [ 5 ] The reaction is a manifestation of the double bond rule . The Newman–Kwart reaction represents a useful synthetic tool for the preparation of thiophenol derivatives. The Newman–Kwart rearrangement is intramolecular . It is generally believed to be a concerted process, proceeding via a four-membered cyclic transition state (rather than a two-step process passing through a discrete reactive intermediate ). [ 3 ] [ 6 ] The enthalpy of activation for this transition state is generally quite high for typical substrates (Δ H ‡ ~ 30 to 40 kcal/mol), necessitating high reaction temperatures (200 to 300 °C, with Ph 2 O as solvent or run neat). [ 7 ] A Pd-catalyzed process [ 2 ] and conditions under photoredox catalysis [ 8 ] (both proceeding through complex multistep mechanisms) are known. These catalytic processes allow much milder reaction conditions to be used (100 °C for Pd catalysis, ambient temperature for photoredox). The Newman–Kwart rearrangement is an important prelude to the synthesis of thiophenols . A phenol ( 1 ) is deprotonated with a base and then treated with a thiocarbamoyl chloride ( 2 ) to form an O -aryl thiocarbamate ( 3 ). Heating 3 to around 250 °C causes it to undergo the Newman–Kwart rearrangement to an S -aryl thiocarbamate ( 4 ). Alkaline hydrolysis or similar cleavage yields a thiophenol ( 5 ). [ 6 ] [ 9 ]
https://en.wikipedia.org/wiki/Newman–Kwart_rearrangement
The Newman–Penrose ( NP ) formalism [ 1 ] [ 2 ] is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism , [ 3 ] where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors, two real and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to the treatment of the propagation of radiation in curved spacetime. The Weyl scalars , derived from the Weyl tensor , are often used. In particular, it can be shown that one of these scalars, $\Psi _{4}$ in the appropriate frame, encodes the outgoing gravitational radiation of an asymptotically flat system. [ 4 ] Newman and Penrose introduced the following functions as primary quantities using this tetrad: [ 1 ] [ 2 ] In many situations, especially algebraically special spacetimes or vacuum spacetimes, the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations. In this article, we will only employ the tensorial rather than the spinorial version of the NP formalism, because the former is easier to understand and more popular in relevant papers. One can refer to ref. [ 5 ] for a unified formulation of these two versions. The formalism is developed for four-dimensional spacetime, with a Lorentzian-signature metric. At each point, a tetrad (a set of four vectors) is introduced. The first two vectors, $\ell ^{\mu }$ and $n^{\mu }$ , are just a pair of standard (real) null vectors such that $\ell ^{a}n_{a}=-1$ . For example, we can think in terms of spherical coordinates, and take $\ell ^{a}$ to be the outgoing null vector, and $n^{a}$ to be the ingoing null vector. A complex null vector is then constructed by combining a pair of real, orthogonal unit space-like vectors. In the case of spherical coordinates, the standard choice is
$$m^{\mu }={\frac {1}{\sqrt {2}}}\left({\hat {\theta }}+i{\hat {\phi }}\right)^{\mu }\,.$$
The complex conjugate of this vector then forms the fourth element of the tetrad. Two sets of signature and normalization conventions are in use for the NP formalism: $\{(+,-,-,-);\ \ell ^{a}n_{a}=1\,,\ m^{a}{\bar {m}}_{a}=-1\}$ and $\{(-,+,+,+);\ \ell ^{a}n_{a}=-1\,,\ m^{a}{\bar {m}}_{a}=1\}$ . The former is the original one that was adopted when the NP formalism was developed [ 1 ] [ 2 ] and has been widely used [ 6 ] [ 7 ] in black-hole physics, gravitational waves and various other areas in general relativity. 
However, it is the latter convention that is usually employed in the contemporary study of black holes from quasilocal perspectives [ 8 ] (such as isolated horizons [ 9 ] and dynamical horizons [ 10 ] [ 11 ] ). In this article, we will utilize $\{(-,+,+,+);\ \ell ^{a}n_{a}=-1\,,\ m^{a}{\bar {m}}_{a}=1\}$ for a systematic review of the NP formalism (see also refs. [ 12 ] [ 13 ] [ 14 ] ). It is important to note that, when switching from $\{(+,-,-,-)\,,\ \ell ^{a}n_{a}=1\,,\ m^{a}{\bar {m}}_{a}=-1\}$ to $\{(-,+,+,+)\,,\ \ell ^{a}n_{a}=-1\,,\ m^{a}{\bar {m}}_{a}=1\}$ , the definitions of the spin coefficients, Weyl-NP scalars $\Psi _{i}$ and Ricci-NP scalars $\Phi _{ij}$ need to change their signs; this way, the Einstein–Maxwell equations can be left unchanged. In the NP formalism, the complex null tetrad contains two real null (co)vectors $\{\ell \,,n\}$ and two complex null (co)vectors $\{m\,,{\bar {m}}\}$ . Being null (co)vectors, the self-normalization of $\{\ell \,,n\}$ naturally vanishes,
$$\ell _{a}\ell ^{a}=n_{a}n^{a}=m_{a}m^{a}={\bar {m}}_{a}{\bar {m}}^{a}=0\,,$$
so the following two pairs of cross-normalization are adopted,
$$\ell _{a}n^{a}=-1=\ell ^{a}n_{a}\,,\quad m_{a}{\bar {m}}^{a}=1=m^{a}{\bar {m}}_{a}\,,$$
while contractions between the two pairs also vanish,
$$\ell _{a}m^{a}=\ell _{a}{\bar {m}}^{a}=n_{a}m^{a}=n_{a}{\bar {m}}^{a}=0\,.$$
Here the indices can be raised and lowered by the global metric $g_{ab}$ , which in turn can be obtained via
$$g_{ab}=-\ell _{a}n_{b}-n_{a}\ell _{b}+m_{a}{\bar {m}}_{b}+{\bar {m}}_{a}m_{b}\,,\qquad g^{ab}=-\ell ^{a}n^{b}-n^{a}\ell ^{b}+m^{a}{\bar {m}}^{b}+{\bar {m}}^{a}m^{b}\,.$$
In keeping with the formalism's practice of using distinct unindexed symbols for each component of an object, the covariant derivative operator $\nabla _{a}$ is expressed using four separate symbols ( $D,\Delta ,\delta ,{\bar {\delta }}$ ) which name a directional covariant derivative operator for each tetrad direction. Given a linear combination of tetrad vectors, $X^{a}=\mathrm {a} \ell ^{a}+\mathrm {b} n^{a}+\mathrm {c} m^{a}+\mathrm {d} {\bar {m}}^{a}$ , the covariant derivative operator in the $X^{a}$ direction is $\nabla _{X}=X^{a}\nabla _{a}=(\mathrm {a} D+\mathrm {b} \Delta +\mathrm {c} \delta +\mathrm {d} {\bar {\delta }})$ . 
The operators are defined as D := ∇ ℓ = ℓ a ∇ a , Δ := ∇ n = n a ∇ a , δ := ∇ m = m a ∇ a , δ ¯ := ∇ m ¯ = m ¯ a ∇ a , {\displaystyle {\begin{aligned}D&:=\nabla _{\boldsymbol {\ell }}=\ell ^{a}\nabla _{a}\,,&\Delta &:=\nabla _{\boldsymbol {n}}=n^{a}\nabla _{a}\,,\\[1ex]\delta &:=\nabla _{\boldsymbol {m}}=m^{a}\nabla _{a}\,,&{\bar {\delta }}&:=\nabla _{\boldsymbol {\bar {m}}}={\bar {m}}^{a}\nabla _{a}\,,\end{aligned}}} which reduce to D = ℓ a ∂ a , Δ = n a ∂ a , δ = m a ∂ a , δ ¯ = m ¯ a ∂ a {\displaystyle D=\ell ^{a}\partial _{a}\,,\Delta =n^{a}\partial _{a}\,,\delta =m^{a}\partial _{a}\,,{\bar {\delta }}={\bar {m}}^{a}\partial _{a}} when acting on scalar functions. In NP formalism, instead of using index notations as in orthogonal tetrads , each Ricci rotation coefficient γ i j k {\displaystyle \gamma _{ijk}} in the null tetrad is assigned a lower-case Greek letter, which constitute the 12 complex spin coefficients (in three groups), κ := − m a D ℓ a = − m a ℓ b ∇ b ℓ a , τ := − m a Δ ℓ a = − m a n b ∇ b ℓ a , σ := − m a δ ℓ a = − m a m b ∇ b ℓ a , ρ := − m a δ ¯ ℓ a = − m a m ¯ b ∇ b ℓ a ; π := m ¯ a D n a = m ¯ a ℓ b ∇ b n a , ν := m ¯ a Δ n a = m ¯ a n b ∇ b n a , μ := m ¯ a δ n a = m ¯ a m b ∇ b n a , λ := m ¯ a δ ¯ n a = m ¯ a m ¯ b ∇ b n a ; {\displaystyle {\begin{aligned}\kappa &:=-m^{a}D\ell _{a}=-m^{a}\ell ^{b}\nabla _{b}\ell _{a}\,,&\tau &:=-m^{a}\Delta \ell _{a}=-m^{a}n^{b}\nabla _{b}\ell _{a}\,,\\[1ex]\sigma &:=-m^{a}\delta \ell _{a}=-m^{a}m^{b}\nabla _{b}\ell _{a}\,,&\rho &:=-m^{a}{\bar {\delta }}\ell _{a}=-m^{a}{\bar {m}}^{b}\nabla _{b}\ell _{a}\,;\\[1ex]\pi &:={\bar {m}}^{a}Dn_{a}={\bar {m}}^{a}\ell ^{b}\nabla _{b}n_{a}\,,&\nu &:={\bar {m}}^{a}\Delta n_{a}={\bar {m}}^{a}n^{b}\nabla _{b}n_{a}\,,\\[1ex]\mu &:={\bar {m}}^{a}\delta n_{a}={\bar {m}}^{a}m^{b}\nabla _{b}n_{a}\,,&\lambda &:={\bar {m}}^{a}{\bar {\delta }}n_{a}={\bar {m}}^{a}{\bar {m}}^{b}\nabla _{b}n_{a}\,;\end{aligned}}} ε := − 1 2 ( n a D ℓ a − m ¯ a D m a ) = − 1 2 ( n a ℓ b ∇ b ℓ a − m ¯ a ℓ b ∇ b m a ) , γ := − 1 2 ( n a Δ ℓ a − m ¯ a Δ m a ) = − 1 2 ( n a n b ∇ b ℓ a − m ¯ a n b ∇ b m a ) , β := − 1 2 ( n a δ ℓ a − m ¯ a δ m a ) = − 1 2 ( n a m b ∇ b ℓ a − m ¯ a m b ∇ b m a ) , α := − 1 2 ( n a δ ¯ ℓ a − m ¯ a δ ¯ m a ) = − 1 2 ( n a m ¯ b ∇ b ℓ a − m ¯ a m ¯ b ∇ b m a ) . {\displaystyle {\begin{aligned}\varepsilon &:=-{\tfrac {1}{2}}\left(n^{a}D\ell _{a}-{\bar {m}}^{a}Dm_{a}\right)=-{\tfrac {1}{2}}\left(n^{a}\ell ^{b}\nabla _{b}\ell _{a}-{\bar {m}}^{a}\ell ^{b}\nabla _{b}m_{a}\right)\,,\\[1ex]\gamma &:=-{\tfrac {1}{2}}\left(n^{a}\Delta \ell _{a}-{\bar {m}}^{a}\Delta m_{a}\right)=-{\tfrac {1}{2}}\left(n^{a}n^{b}\nabla _{b}\ell _{a}-{\bar {m}}^{a}n^{b}\nabla _{b}m_{a}\right)\,,\\[1ex]\beta &:=-{\tfrac {1}{2}}\left(n^{a}\delta \ell _{a}-{\bar {m}}^{a}\delta m_{a}\right)=-{\tfrac {1}{2}}\left(n^{a}m^{b}\nabla _{b}\ell _{a}-{\bar {m}}^{a}m^{b}\nabla _{b}m_{a}\right)\,,\\[1ex]\alpha &:=-{\tfrac {1}{2}}\left(n^{a}{\bar {\delta }}\ell _{a}-{\bar {m}}^{a}{\bar {\delta }}m_{a}\right)=-{\tfrac {1}{2}}\left(n^{a}{\bar {m}}^{b}\nabla _{b}\ell _{a}-{\bar {m}}^{a}{\bar {m}}^{b}\nabla _{b}m_{a}\right)\,.\end{aligned}}} Spin coefficients are the primary quantities in NP formalism, with which all other NP quantities (as defined below) could be calculated indirectly using the NP field equations. Thus, NP formalism is sometimes referred to as spin-coefficient formalism as well. 
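For scalar functions the four operators reduce to ordinary directional derivatives, which is easy to illustrate. The sympy sketch below is my own addition: the spherical-coordinate tetrad components are the standard flat-space ones discussed earlier, and the sample scalar field is arbitrary, chosen only for demonstration.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', real=True)
coords = (t, r, th, ph)
sqrt2 = sp.sqrt(2)

# Flat-space null tetrad in spherical coordinates (contravariant components), using
# theta_hat = (1/r) d_theta and phi_hat = (1/(r sin(theta))) d_phi in the coordinate basis.
l    = (1/sqrt2,  1/sqrt2, 0, 0)
n    = (1/sqrt2, -1/sqrt2, 0, 0)
m    = (0, 0, 1/(sqrt2*r), sp.I/(sqrt2*r*sp.sin(th)))
mbar = tuple(sp.conjugate(c) for c in m)

def directional(vec, f):
    """X^a d_a f -- for scalars the covariant derivative is just the partial derivative."""
    return sp.simplify(sum(c * sp.diff(f, x) for c, x in zip(vec, coords)))

D        = lambda f: directional(l, f)
Delta    = lambda f: directional(n, f)
delta    = lambda f: directional(m, f)
deltabar = lambda f: directional(mbar, f)

f = t * r**2 * sp.cos(th)        # arbitrary sample scalar field
print(D(f))       # equals (r**2 + 2*r*t)*cos(theta)/sqrt(2)
print(Delta(f))   # equals (r**2 - 2*r*t)*cos(theta)/sqrt(2)
print(delta(f))   # equals -r*t*sin(theta)/sqrt(2)
```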
The sixteen directional covariant derivatives of tetrad vectors are sometimes called the transportation/propagation equations, [ citation needed ] perhaps because the derivatives are zero when the tetrad vector is parallel propagated or transported in the direction of the derivative operator. These results in this exact notation are given by O'Donnell: [ 5 ] : 57–58(3.220) D ℓ a = ( ε + ε ¯ ) ℓ a − κ ¯ m a − κ m ¯ a , Δ ℓ a = ( γ + γ ¯ ) ℓ a − τ ¯ m a − τ m ¯ a , δ ℓ a = ( α ¯ + β ) ℓ a − ρ ¯ m a − σ m ¯ a , δ ¯ ℓ a = ( α + β ¯ ) ℓ a − σ ¯ m a − ρ m ¯ a ; {\displaystyle {\begin{aligned}D\ell ^{a}&=\left(\varepsilon +{\bar {\varepsilon }}\right)\ell ^{a}-{\bar {\kappa }}m^{a}-\kappa {\bar {m}}^{a}\,,\\[1ex]\Delta \ell ^{a}&=\left(\gamma +{\bar {\gamma }}\right)\ell ^{a}-{\bar {\tau }}m^{a}-\tau {\bar {m}}^{a}\,,\\[1ex]\delta \ell ^{a}&=\left({\bar {\alpha }}+\beta \right)\ell ^{a}-{\bar {\rho }}m^{a}-\sigma {\bar {m}}^{a}\,,\\[1ex]{\bar {\delta }}\ell ^{a}&=\left(\alpha +{\bar {\beta }}\right)\ell ^{a}-{\bar {\sigma }}m^{a}-\rho {\bar {m}}^{a}\,;\end{aligned}}} D n a = π m a + π ¯ m ¯ a − ( ε + ε ¯ ) n a , Δ n a = ν m a + ν ¯ m ¯ a − ( γ + γ ¯ ) n a , δ n a = μ m a + λ ¯ m ¯ a − ( α ¯ + β ) n a , δ ¯ n a = λ m a + μ ¯ m ¯ a − ( α + β ¯ ) n a ; {\displaystyle {\begin{aligned}Dn^{a}&=\pi m^{a}+{\bar {\pi }}{\bar {m}}^{a}-\left(\varepsilon +{\bar {\varepsilon }}\right)n^{a}\,,\\[1ex]\Delta n^{a}&=\nu m^{a}+{\bar {\nu }}{\bar {m}}^{a}-\left(\gamma +{\bar {\gamma }}\right)n^{a}\,,\\[1ex]\delta n^{a}&=\mu m^{a}+{\bar {\lambda }}{\bar {m}}^{a}-\left({\bar {\alpha }}+\beta \right)n^{a}\,,\\[1ex]{\bar {\delta }}n^{a}&=\lambda m^{a}+{\bar {\mu }}{\bar {m}}^{a}-\left(\alpha +{\bar {\beta }}\right)n^{a}\,;\end{aligned}}} D m a = ( ε − ε ¯ ) m a + π ¯ ℓ a − κ n a , Δ m a = ( γ − γ ¯ ) m a + ν ¯ ℓ a − τ n a , δ m a = ( β − α ¯ ) m a + λ ¯ ℓ a − σ n a , δ ¯ m a = ( α − β ¯ ) m a + μ ¯ ℓ a − ρ n a ; {\displaystyle {\begin{aligned}Dm^{a}&=\left(\varepsilon -{\bar {\varepsilon }}\right)m^{a}+{\bar {\pi }}\ell ^{a}-\kappa n^{a}\,,\\[1ex]\Delta m^{a}&=\left(\gamma -{\bar {\gamma }}\right)m^{a}+{\bar {\nu }}\ell ^{a}-\tau n^{a}\,,\\[1ex]\delta m^{a}&=\left(\beta -{\bar {\alpha }}\right)m^{a}+{\bar {\lambda }}\ell ^{a}-\sigma n^{a}\,,\\[1ex]{\bar {\delta }}m^{a}&=\left(\alpha -{\bar {\beta }}\right)m^{a}+{\bar {\mu }}\ell ^{a}-\rho n^{a}\,;\end{aligned}}} D m ¯ a = ( ε ¯ − ε ) m ¯ a + π ℓ a − κ ¯ n a , Δ m ¯ a = ( γ ¯ − γ ) m ¯ a + ν ℓ a − τ ¯ n a , δ m ¯ a = ( α ¯ − β ) m ¯ a + μ ℓ a − ρ ¯ n a , δ ¯ m ¯ a = ( β ¯ − α ) m ¯ a + λ ℓ a − σ ¯ n a . {\displaystyle {\begin{aligned}D{\bar {m}}^{a}&=\left({\bar {\varepsilon }}-\varepsilon \right){\bar {m}}^{a}+\pi \ell ^{a}-{\bar {\kappa }}n^{a}\,,\\[1ex]\Delta {\bar {m}}^{a}&=\left({\bar {\gamma }}-\gamma \right){\bar {m}}^{a}+\nu \ell ^{a}-{\bar {\tau }}n^{a}\,,\\[1ex]\delta {\bar {m}}^{a}&=\left({\bar {\alpha }}-\beta \right){\bar {m}}^{a}+\mu \ell ^{a}-{\bar {\rho }}n^{a}\,,\\[1ex]{\bar {\delta }}{\bar {m}}^{a}&=\left({\bar {\beta }}-\alpha \right){\bar {m}}^{a}+\lambda \ell ^{a}-{\bar {\sigma }}n^{a}\,.\end{aligned}}} The two equations for the covariant derivative of a real null tetrad vector in its own direction indicate whether or not the vector is tangent to a geodesic and if so, whether the geodesic has an affine parameter. 
A null tangent vector T a {\displaystyle T^{a}} is tangent to an affinely parameterized null geodesic if T b ∇ b T a = 0 {\displaystyle T^{b}\nabla _{b}T^{a}=0} , which is to say if the vector is unchanged by parallel propagation or transportation in its own direction. [ 15 ] : 41(3.3.1) D ℓ a = ( ε + ε ¯ ) ℓ a − κ ¯ m a − κ m ¯ a {\displaystyle D\ell ^{a}=(\varepsilon +{\bar {\varepsilon }})\ell ^{a}-{\bar {\kappa }}m^{a}-\kappa {\bar {m}}^{a}} shows that ℓ a {\displaystyle \ell ^{a}} is tangent to a geodesic if and only if κ = 0 {\displaystyle \kappa =0} , and is tangent to an affinely parameterized geodesic if in addition ( ε + ε ¯ ) = 0 {\displaystyle (\varepsilon +{\bar {\varepsilon }})=0} . Similarly, Δ n a = ν m a + ν ¯ m ¯ a − ( γ + γ ¯ ) n a {\displaystyle \Delta n^{a}=\nu m^{a}+{\bar {\nu }}{\bar {m}}^{a}-(\gamma +{\bar {\gamma }})n^{a}} shows that n a {\displaystyle n^{a}} is geodesic if and only if ν = 0 {\displaystyle \nu =0} , and has affine parameterization when ( γ + γ ¯ ) = 0 {\displaystyle (\gamma +{\bar {\gamma }})=0} . (The complex null tetrad vectors m a = x a + i y a {\displaystyle m^{a}=x^{a}+iy^{a}} and m ¯ a = x a − i y a {\displaystyle {\bar {m}}^{a}=x^{a}-iy^{a}} would have to be separated into the spacelike basis vectors x a {\displaystyle x^{a}} and y a {\displaystyle y^{a}} before asking if either or both of those are tangent to spacelike geodesics.) The metric-compatibility or torsion-freeness of the covariant derivative is recast into the commutators of the directional derivatives , Δ D − D Δ = ( γ + γ ¯ ) D + ( ε + ε ¯ ) Δ − ( τ ¯ + π ) δ − ( τ + π ¯ ) δ ¯ , δ D − D δ = ( α ¯ + β − π ¯ ) D + κ Δ − ( ρ ¯ + ε − ε ¯ ) δ − σ δ ¯ , δ Δ − Δ δ = − ν ¯ D + ( τ − α ¯ − β ) Δ + ( μ − γ + γ ¯ ) δ + λ ¯ δ ¯ , δ ¯ δ − δ δ ¯ = ( μ ¯ − μ ) D + ( ρ ¯ − ρ ) Δ + ( α − β ¯ ) δ − ( α ¯ − β ) δ ¯ , {\displaystyle {\begin{aligned}\Delta D-D\Delta &=\left(\gamma +{\bar {\gamma }}\right)D+\left(\varepsilon +{\bar {\varepsilon }}\right)\Delta -\left({\bar {\tau }}+\pi \right)\delta -\left(\tau +{\bar {\pi }}\right){\bar {\delta }}\,,\\[1ex]\delta D-D\delta &=\left({\bar {\alpha }}+\beta -{\bar {\pi }}\right)D+\kappa \Delta -\left({\bar {\rho }}+\varepsilon -{\bar {\varepsilon }}\right)\delta -\sigma {\bar {\delta }}\,,\\[1ex]\delta \Delta -\Delta \delta &=-{\bar {\nu }}D+\left(\tau -{\bar {\alpha }}-\beta \right)\Delta +\left(\mu -\gamma +{\bar {\gamma }}\right)\delta +{\bar {\lambda }}{\bar {\delta }}\,,\\[1ex]{\bar {\delta }}\delta -\delta {\bar {\delta }}&=\left({\bar {\mu }}-\mu \right)D+\left({\bar {\rho }}-\rho \right)\Delta +\left(\alpha -{\bar {\beta }}\right)\delta -\left({\bar {\alpha }}-\beta \right){\bar {\delta }}\,,\end{aligned}}} which imply that Δ ℓ a − D n a = ( γ + γ ¯ ) ℓ a + ( ε + ε ¯ ) n a − ( τ ¯ + π ) m a − ( τ + π ¯ ) m ¯ a , δ ℓ a − D m a = ( α ¯ + β − π ¯ ) ℓ a + κ n a − ( ρ ¯ + ε − ε ¯ ) m a − σ m ¯ a , δ n a − Δ m a = − ν ¯ ℓ a + ( τ − α ¯ − β ) n a + ( μ − γ + γ ¯ ) m a + λ ¯ m ¯ a , δ ¯ m a − δ m ¯ a = ( μ ¯ − μ ) ℓ a + ( ρ ¯ − ρ ) n a + ( α − β ¯ ) m a − ( α ¯ − β ) m ¯ a . 
{\displaystyle {\begin{aligned}\Delta \ell ^{a}-Dn^{a}&=\left(\gamma +{\bar {\gamma }}\right)\ell ^{a}+\left(\varepsilon +{\bar {\varepsilon }}\right)n^{a}-\left({\bar {\tau }}+\pi \right)m^{a}-\left(\tau +{\bar {\pi }}\right){\bar {m}}^{a}\,,\\[1ex]\delta \ell ^{a}-Dm^{a}&=\left({\bar {\alpha }}+\beta -{\bar {\pi }}\right)\ell ^{a}+\kappa n^{a}-\left({\bar {\rho }}+\varepsilon -{\bar {\varepsilon }}\right)m^{a}-\sigma {\bar {m}}^{a}\,,\\[1ex]\delta n^{a}-\Delta m^{a}&=-{\bar {\nu }}\ell ^{a}+\left(\tau -{\bar {\alpha }}-\beta \right)n^{a}+\left(\mu -\gamma +{\bar {\gamma }}\right)m^{a}+{\bar {\lambda }}{\bar {m}}^{a}\,,\\[1ex]{\bar {\delta }}m^{a}-\delta {\bar {m}}^{a}&=\left({\bar {\mu }}-\mu \right)\ell ^{a}+\left({\bar {\rho }}-\rho \right)n^{a}+\left(\alpha -{\bar {\beta }}\right)m^{a}-\left({\bar {\alpha }}-\beta \right){\bar {m}}^{a}\,.\end{aligned}}} Note: (i) The above equations can be regarded either as implications of the commutators or combinations of the transportation equations; (ii) In these implied equations, the vectors { ℓ a , n a , m a , m ¯ a } {\displaystyle \{\ell ^{a},n^{a},m^{a},{\bar {m}}^{a}\}} can be replaced by the covectors and the equations still hold. The 10 independent components of the Weyl tensor can be encoded into 5 complex Weyl-NP scalars , Ψ 0 := C a b c d ℓ a m b ℓ c m d , Ψ 1 := C a b c d ℓ a n b ℓ c m d , Ψ 2 := C a b c d ℓ a m b m ¯ c n d , Ψ 3 := C a b c d ℓ a n b m ¯ c n d , Ψ 4 := C a b c d n a m ¯ b n c m ¯ d . {\displaystyle {\begin{aligned}\Psi _{0}&:=C_{abcd}\ell ^{a}m^{b}\ell ^{c}m^{d}\,,&\Psi _{1}&:=C_{abcd}\ell ^{a}n^{b}\ell ^{c}m^{d}\,,\\\Psi _{2}&:=C_{abcd}\ell ^{a}m^{b}{\bar {m}}^{c}n^{d}\,,&\Psi _{3}&:=C_{abcd}\ell ^{a}n^{b}{\bar {m}}^{c}n^{d}\,,\\\Psi _{4}&:=C_{abcd}n^{a}{\bar {m}}^{b}n^{c}{\bar {m}}^{d}\,.\end{aligned}}} The 10 independent components of the Ricci tensor are encoded into 4 real scalars { Φ 00 {\displaystyle \{\Phi _{00}} , Φ 11 {\displaystyle \Phi _{11}} , Φ 22 {\displaystyle \Phi _{22}} , Λ } {\displaystyle \Lambda \}} and 3 complex scalars { Φ 10 , Φ 20 , Φ 21 } {\displaystyle \{\Phi _{10},\Phi _{20},\Phi _{21}\}} (with their complex conjugates), Φ 00 := 1 2 R a b ℓ a ℓ b , Φ 11 := 1 4 R a b ( ℓ a n b + m a m ¯ b ) , Φ 22 := 1 2 R a b n a n b , Λ := 1 24 R ; {\displaystyle {\begin{aligned}\Phi _{00}&:={\tfrac {1}{2}}R_{ab}\ell ^{a}\ell ^{b}\,,&\Phi _{11}&:={\tfrac {1}{4}}R_{ab}\left(\ell ^{a}n^{b}+m^{a}{\bar {m}}^{b}\right),\\[1ex]\Phi _{22}&:={\tfrac {1}{2}}R_{ab}n^{a}n^{b}\,,&\Lambda &:={\tfrac {1}{24}}R\,;\end{aligned}}} Φ 01 := 1 2 R a b ℓ a m b , Φ 10 := 1 2 R a b ℓ a m ¯ b = Φ 01 ¯ , Φ 02 := 1 2 R a b m a m b , Φ 20 := 1 2 R a b m ¯ a m ¯ b = Φ 02 ¯ , Φ 12 := 1 2 R a b m a n b , Φ 21 := 1 2 R a b m ¯ a n b = Φ 12 ¯ . {\displaystyle {\begin{aligned}\Phi _{01}&:={\tfrac {1}{2}}R_{ab}\ell ^{a}m^{b}\,,&\Phi _{10}&:={\tfrac {1}{2}}R_{ab}\ell ^{a}{\bar {m}}^{b}={\overline {\Phi _{01}}}\,,\\\Phi _{02}&:={\tfrac {1}{2}}R_{ab}m^{a}m^{b}\,,&\Phi _{20}&:={\tfrac {1}{2}}R_{ab}{\bar {m}}^{a}{\bar {m}}^{b}={\overline {\Phi _{02}}}\,,\\\Phi _{12}&:={\tfrac {1}{2}}R_{ab}m^{a}n^{b}\,,&\Phi _{21}&:={\tfrac {1}{2}}R_{ab}{\bar {m}}^{a}n^{b}={\overline {\Phi _{12}}}\,.\end{aligned}}} In these definitions, R a b {\displaystyle R_{ab}} could be replaced by its trace-free part Q a b = R a b − 1 4 g a b R {\textstyle Q_{ab}=R_{ab}-{\tfrac {1}{4}}g_{ab}R} [ 13 ] or by the Einstein tensor G a b = R a b − 1 2 g a b R {\textstyle G_{ab}=R_{ab}-{\tfrac {1}{2}}g_{ab}R} because of the normalization relations. 
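Since the Weyl-NP scalars are nothing but contractions of the Weyl tensor with the tetrad, they are straightforward to evaluate numerically. The Python sketch below is my own illustration (the function name and array conventions are assumptions): it takes a Weyl tensor supplied with all indices down, in the same frame as the tetrad components, and returns the five scalars; the Ricci-NP scalars would be obtained by the analogous contractions of the Ricci tensor.

```python
import numpy as np

def weyl_np_scalars(C, l, n, m):
    """Five Weyl-NP scalars from the Weyl tensor C_{abcd} (4x4x4x4 array, all indices down)
    and a null tetrad l^a, n^a, m^a given with upper indices in the same frame."""
    mbar = np.conj(m)
    con = lambda a, b, c, d: np.einsum('abcd,a,b,c,d->', C, a, b, c, d)
    return {
        'Psi0': con(l, m, l, m),
        'Psi1': con(l, n, l, m),
        'Psi2': con(l, m, mbar, n),
        'Psi3': con(l, n, mbar, n),
        'Psi4': con(n, mbar, n, mbar),
    }

# Trivial sanity check: in flat spacetime the Weyl tensor vanishes, so all five scalars are zero.
l = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
n = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
m = np.array([0.0, 0.0, 1.0, 1j]) / np.sqrt(2)
print(weyl_np_scalars(np.zeros((4, 4, 4, 4)), l, n, m))
```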
Also, Φ 11 {\displaystyle \Phi _{11}} is reduced to Φ 11 = 1 2 R a b ℓ a n b = 1 2 R a b m a m ¯ b {\textstyle \Phi _{11}={\tfrac {1}{2}}R_{ab}\ell ^{a}n^{b}={\tfrac {1}{2}}R_{ab}m^{a}{\bar {m}}^{b}} for electrovacuum ( Λ = 0 {\displaystyle \Lambda =0} ). In a complex null tetrad, Ricci identities give rise to the following NP field equations connecting spin coefficients, Weyl-NP and Ricci-NP scalars (recall that in an orthogonal tetrad, Ricci rotation coefficients would respect Cartan's first and second structure equations ), [ 5 ] [ 13 ] These equations in various notations can be found in several texts. [ 3 ] : 46–47(310(a)-(r)) [ 13 ] : 671–672(E.12) The notation in Frolov and Novikov [ 13 ] is identical. D ρ − δ ¯ κ = ( ρ 2 + σ σ ¯ ) + ( ε + ε ¯ ) ρ − κ ¯ τ − κ ( 3 α + β ¯ − π ) + Φ 00 , D σ − δ κ = ( ρ + ρ ¯ ) σ + ( 3 ε − ε ¯ ) σ − ( τ − π ¯ + α ¯ + 3 β ) κ + Ψ 0 , D τ − Δ κ = ( τ + π ¯ ) ρ + ( τ ¯ + π ) σ + ( ε − ε ¯ ) τ − ( 3 γ + γ ¯ ) κ + Ψ 1 + Φ 01 , D α − δ ¯ ε = ( ρ + ε ¯ − 2 ε ) α + β σ ¯ − β ¯ ε − κ λ − κ ¯ γ + ( ε + ρ ) π + Φ 10 , D β − δ ε = ( α + π ) σ + ( ρ ¯ − ε ¯ ) β − ( μ + γ ) κ − ( α ¯ − π ¯ ) ε + Ψ 1 , D γ − Δ ε = ( τ + π ¯ ) α + ( τ ¯ + π ) β − ( ε + ε ¯ ) γ − ( γ + γ ¯ ) ε + τ π − ν κ + Ψ 2 + Φ 11 − Λ , D λ − δ ¯ π = ( ρ λ + σ ¯ μ ) + π 2 + ( α − β ¯ ) π − ν κ ¯ − ( 3 ε − ε ¯ ) λ + Φ 20 , D μ − δ π = ( ρ ¯ μ + σ λ ) + π π ¯ − ( ε + ε ¯ ) μ − ( α ¯ − β ) π − ν κ + Ψ 2 + 2 Λ , D ν − Δ π = ( π + τ ¯ ) μ + ( π ¯ + τ ) λ + ( γ − γ ¯ ) π − ( 3 ε + ε ¯ ) ν + Ψ 3 + Φ 21 , Δ λ − δ ¯ ν = − ( μ + μ ¯ ) λ − ( 3 γ − γ ¯ ) λ + ( 3 α + β ¯ + π − τ ¯ ) ν − Ψ 4 , δ ρ − δ ¯ σ = ρ ( α ¯ + β ) − σ ( 3 α − β ¯ ) + ( ρ − ρ ¯ ) τ + ( μ − μ ¯ ) κ − Ψ 1 + Φ 01 , δ α − δ ¯ β = ( μ ρ − λ σ ) + α α ¯ + β β ¯ − 2 α β + γ ( ρ − ρ ¯ ) + ε ( μ − μ ¯ ) − Ψ 2 + Φ 11 + Λ , δ λ − δ ¯ μ = ( ρ − ρ ¯ ) ν + ( μ − μ ¯ ) π + ( α + β ¯ ) μ + ( α ¯ − 3 β ) λ − Ψ 3 + Φ 21 , δ ν − Δ μ = ( μ 2 + λ λ ¯ ) + ( γ + γ ¯ ) μ − ν ¯ π + ( τ − 3 β − α ¯ ) ν + Φ 22 , δ γ − Δ β = ( τ − α ¯ − β ) γ + μ τ − σ ν − ε ν ¯ − ( γ − γ ¯ − μ ) β + α λ ¯ + Φ 12 , δ τ − Δ σ = ( μ σ + λ ¯ ρ ) + ( τ + β − α ¯ ) τ − ( 3 γ − γ ¯ ) σ − κ ν ¯ + Φ 02 , Δ ρ − δ ¯ τ = − ( ρ μ ¯ + σ λ ) + ( β ¯ − α − τ ¯ ) τ + ( γ + γ ¯ ) ρ + ν κ − Ψ 2 − 2 Λ , Δ α − δ ¯ γ = ( ρ + ε ) ν − ( τ + β ) λ + ( γ ¯ − μ ¯ ) α + ( β ¯ − τ ¯ ) γ − Ψ 3 . 
{\displaystyle {\begin{aligned}D\rho -{\bar {\delta }}\kappa &=(\rho ^{2}+\sigma {\bar {\sigma }})+(\varepsilon +{\bar {\varepsilon }})\rho -{\bar {\kappa }}\tau -\kappa (3\alpha +{\bar {\beta }}-\pi )+\Phi _{00}\,,\\[1ex]D\sigma -\delta \kappa &=(\rho +{\bar {\rho }})\sigma +(3\varepsilon -{\bar {\varepsilon }})\sigma -(\tau -{\bar {\pi }}+{\bar {\alpha }}+3\beta )\kappa +\Psi _{0}\,,\\[1ex]D\tau -\Delta \kappa &=(\tau +{\bar {\pi }})\rho +({\bar {\tau }}+\pi )\sigma +(\varepsilon -{\bar {\varepsilon }})\tau -(3\gamma +{\bar {\gamma }})\kappa +\Psi _{1}+\Phi _{01}\,,\\[1ex]D\alpha -{\bar {\delta }}\varepsilon &=(\rho +{\bar {\varepsilon }}-2\varepsilon )\alpha +\beta {\bar {\sigma }}-{\bar {\beta }}\varepsilon -\kappa \lambda -{\bar {\kappa }}\gamma +(\varepsilon +\rho )\pi +\Phi _{10}\,,\\[1ex]D\beta -\delta \varepsilon &=(\alpha +\pi )\sigma +({\bar {\rho }}-{\bar {\varepsilon }})\beta -(\mu +\gamma )\kappa -({\bar {\alpha }}-{\bar {\pi }})\varepsilon +\Psi _{1}\,,\\[1ex]D\gamma -\Delta \varepsilon &=(\tau +{\bar {\pi }})\alpha +({\bar {\tau }}+\pi )\beta -(\varepsilon +{\bar {\varepsilon }})\gamma -(\gamma +{\bar {\gamma }})\varepsilon +\tau \pi -\nu \kappa +\Psi _{2}+\Phi _{11}-\Lambda \,,\\[1ex]D\lambda -{\bar {\delta }}\pi &=(\rho \lambda +{\bar {\sigma }}\mu )+\pi ^{2}+(\alpha -{\bar {\beta }})\pi -\nu {\bar {\kappa }}-(3\varepsilon -{\bar {\varepsilon }})\lambda +\Phi _{20}\,,\\[1ex]D\mu -\delta \pi &=({\bar {\rho }}\mu +\sigma \lambda )+\pi {\bar {\pi }}-(\varepsilon +{\bar {\varepsilon }})\mu -({\bar {\alpha }}-\beta )\pi -\nu \kappa +\Psi _{2}+2\Lambda \,,\\[1ex]D\nu -\Delta \pi &=(\pi +{\bar {\tau }})\mu +({\bar {\pi }}+\tau )\lambda +(\gamma -{\bar {\gamma }})\pi -(3\varepsilon +{\bar {\varepsilon }})\nu +\Psi _{3}+\Phi _{21}\,,\\[1ex]\Delta \lambda -{\bar {\delta }}\nu &=-(\mu +{\bar {\mu }})\lambda -(3\gamma -{\bar {\gamma }})\lambda +(3\alpha +{\bar {\beta }}+\pi -{\bar {\tau }})\nu -\Psi _{4}\,,\\[1ex]\delta \rho -{\bar {\delta }}\sigma &=\rho ({\bar {\alpha }}+\beta )-\sigma (3\alpha -{\bar {\beta }})+(\rho -{\bar {\rho }})\tau +(\mu -{\bar {\mu }})\kappa -\Psi _{1}+\Phi _{01}\,,\\[1ex]\delta \alpha -{\bar {\delta }}\beta &=(\mu \rho -\lambda \sigma )+\alpha {\bar {\alpha }}+\beta {\bar {\beta }}-2\alpha \beta +\gamma (\rho -{\bar {\rho }})+\varepsilon (\mu -{\bar {\mu }})-\Psi _{2}+\Phi _{11}+\Lambda \,,\\[1ex]\delta \lambda -{\bar {\delta }}\mu &=(\rho -{\bar {\rho }})\nu +(\mu -{\bar {\mu }})\pi +(\alpha +{\bar {\beta }})\mu +({\bar {\alpha }}-3\beta )\lambda -\Psi _{3}+\Phi _{21}\,,\\[1ex]\delta \nu -\Delta \mu &=(\mu ^{2}+\lambda {\bar {\lambda }})+(\gamma +{\bar {\gamma }})\mu -{\bar {\nu }}\pi +(\tau -3\beta -{\bar {\alpha }})\nu +\Phi _{22}\,,\\[1ex]\delta \gamma -\Delta \beta &=(\tau -{\bar {\alpha }}-\beta )\gamma +\mu \tau -\sigma \nu -\varepsilon {\bar {\nu }}-(\gamma -{\bar {\gamma }}-\mu )\beta +\alpha {\bar {\lambda }}+\Phi _{12}\,,\\[1ex]\delta \tau -\Delta \sigma &=(\mu \sigma +{\bar {\lambda }}\rho )+(\tau +\beta -{\bar {\alpha }})\tau -(3\gamma -{\bar {\gamma }})\sigma -\kappa {\bar {\nu }}+\Phi _{02}\,,\\[1ex]\Delta \rho -{\bar {\delta }}\tau &=-(\rho {\bar {\mu }}+\sigma \lambda )+({\bar {\beta }}-\alpha -{\bar {\tau }})\tau +(\gamma +{\bar {\gamma }})\rho +\nu \kappa -\Psi _{2}-2\Lambda \,,\\[1ex]\Delta \alpha -{\bar {\delta }}\gamma &=(\rho +\varepsilon )\nu -(\tau +\beta )\lambda +({\bar {\gamma }}-{\bar {\mu }})\alpha +({\bar {\beta }}-{\bar {\tau }})\gamma -\Psi _{3}\,.\end{aligned}}} Also, the Weyl-NP scalars Ψ i {\displaystyle \Psi _{i}} and 
the Ricci-NP scalars Φ i j {\displaystyle \Phi _{ij}} can be calculated indirectly from the above NP field equations after obtaining the spin coefficients rather than directly using their definitions. The six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor ) F a b {\displaystyle F_{ab}} can be encoded into three complex Maxwell-NP scalars [ 12 ] ϕ 0 := F a b ℓ a m b , ϕ 1 := 1 2 F a b ( ℓ a n b + m ¯ a m b ) , ϕ 2 := F a b m ¯ a n b , {\displaystyle \phi _{0}:=F_{ab}\ell ^{a}m^{b}\,,\quad \phi _{1}:={\tfrac {1}{2}}F_{ab}\left(\ell ^{a}n^{b}+{\bar {m}}^{a}m^{b}\right),\quad \phi _{2}:=F_{ab}{\bar {m}}^{a}n^{b}\,,} and therefore the eight real Maxwell equations d F = 0 {\displaystyle d\mathbf {F} =0} and d ⋆ F = 0 {\displaystyle d^{\star }\mathbf {F} =0} (as F = d A {\displaystyle \mathbf {F} =dA} ) can be transformed into four complex equations, D ϕ 1 − δ ¯ ϕ 0 = ( π − 2 α ) ϕ 0 + 2 ρ ϕ 1 − κ ϕ 2 , D ϕ 2 − δ ¯ ϕ 1 = − λ ϕ 0 + 2 π ϕ 1 + ( ρ − 2 ε ) ϕ 2 , Δ ϕ 0 − δ ϕ 1 = ( 2 γ − μ ) ϕ 0 − 2 τ ϕ 1 + σ ϕ 2 , Δ ϕ 1 − δ ϕ 2 = ν ϕ 0 − 2 μ ϕ 1 + ( 2 β − τ ) ϕ 2 , {\displaystyle {\begin{aligned}D\phi _{1}-{\bar {\delta }}\phi _{0}&=(\pi -2\alpha )\phi _{0}+2\rho \phi _{1}-\kappa \phi _{2}\,,\\[1ex]D\phi _{2}-{\bar {\delta }}\phi _{1}&=-\lambda \phi _{0}+2\pi \phi _{1}+(\rho -2\varepsilon )\phi _{2}\,,\\[1ex]\Delta \phi _{0}-\delta \phi _{1}&=(2\gamma -\mu )\phi _{0}-2\tau \phi _{1}+\sigma \phi _{2}\,,\\[1ex]\Delta \phi _{1}-\delta \phi _{2}&=\nu \phi _{0}-2\mu \phi _{1}+(2\beta -\tau )\phi _{2}\,,\end{aligned}}} with the Ricci-NP scalars Φ i j {\displaystyle \Phi _{ij}} related to Maxwell scalars by [ 12 ] Φ i j = 2 ϕ i ϕ j ¯ , ( i , j ∈ { 0 , 1 , 2 } ) . {\displaystyle \Phi _{ij}=\,2\,\phi _{i}\,{\overline {\phi _{j}}}\,,\quad (i,j\in \{0,1,2\})\,.} It is worthwhile to point out that, the supplementary equation Φ i j = 2 ϕ i ϕ j ¯ {\displaystyle \Phi _{ij}=2\,\phi _{i}\,{\overline {\phi _{j}}}} is only valid for electromagnetic fields; for example, in the case of Yang-Mills fields there will be Φ i j = Tr ( ϝ i ϝ ¯ j ) {\displaystyle \Phi _{ij}=\,{\text{Tr}}\,(\digamma _{i}\,{\bar {\digamma }}_{j})} where ϝ i ( i ∈ { 0 , 1 , 2 } ) {\displaystyle \digamma _{i}(i\in \{0,1,2\})} are Yang-Mills-NP scalars. [ 16 ] To sum up, the aforementioned transportation equations, NP field equations and Maxwell-NP equations together constitute the Einstein-Maxwell equations in Newman–Penrose formalism. The Weyl scalar Ψ 4 {\displaystyle \Psi _{4}} was defined by Newman & Penrose as Ψ 4 = − C α β γ δ n α m ¯ β n γ m ¯ δ {\displaystyle \Psi _{4}=-C_{\alpha \beta \gamma \delta }n^{\alpha }{\bar {m}}^{\beta }n^{\gamma }{\bar {m}}^{\delta }} (note, however, that the overall sign is arbitrary , and that Newman & Penrose worked with a "timelike" metric signature of ( + , − , − , − ) {\displaystyle (+,-,-,-)} ). In empty space, the Einstein Field Equations reduce to R α β = 0 {\displaystyle R_{\alpha \beta }=0} . From the definition of the Weyl tensor, we see that this means that it equals the Riemann tensor , C α β γ δ = R α β γ δ {\displaystyle C_{\alpha \beta \gamma \delta }=R_{\alpha \beta \gamma \delta }} . We can make the standard choice for the tetrad at infinity: ℓ μ = 1 2 ( t ^ + r ^ ) , {\displaystyle \ell ^{\mu }={\frac {1}{\sqrt {2}}}\left({\hat {t}}+{\hat {r}}\right)\ ,} n μ = 1 2 ( t ^ − r ^ ) , {\displaystyle n^{\mu }={\frac {1}{\sqrt {2}}}\left({\hat {t}}-{\hat {r}}\right)\ ,} m μ = 1 2 ( θ ^ + i ϕ ^ ) . 
{\displaystyle m^{\mu }={\frac {1}{\sqrt {2}}}\left({\hat {\theta }}+i{\hat {\phi }}\right)\ .} In transverse-traceless gauge, a simple calculation shows that linearized gravitational waves are related to components of the Riemann tensor as 1 4 ( h ¨ θ ^ θ ^ − h ¨ ϕ ^ ϕ ^ ) = − R t ^ θ ^ t ^ θ ^ = − R t ^ ϕ ^ r ^ ϕ ^ = − R r ^ θ ^ r ^ θ ^ = R t ^ ϕ ^ t ^ ϕ ^ = R t ^ θ ^ r ^ θ ^ = R r ^ ϕ ^ r ^ ϕ ^ , {\displaystyle {\tfrac {1}{4}}\left({\ddot {h}}_{{\hat {\theta }}{\hat {\theta }}}-{\ddot {h}}_{{\hat {\phi }}{\hat {\phi }}}\right)=-R_{{\hat {t}}{\hat {\theta }}{\hat {t}}{\hat {\theta }}}=-R_{{\hat {t}}{\hat {\phi }}{\hat {r}}{\hat {\phi }}}=-R_{{\hat {r}}{\hat {\theta }}{\hat {r}}{\hat {\theta }}}=R_{{\hat {t}}{\hat {\phi }}{\hat {t}}{\hat {\phi }}}=R_{{\hat {t}}{\hat {\theta }}{\hat {r}}{\hat {\theta }}}=R_{{\hat {r}}{\hat {\phi }}{\hat {r}}{\hat {\phi }}}\ ,} 1 2 h ¨ θ ^ ϕ ^ = − R t ^ θ ^ t ^ ϕ ^ = − R r ^ θ ^ r ^ ϕ ^ = R t ^ θ ^ r ^ ϕ ^ = R r ^ θ ^ t ^ ϕ ^ , {\displaystyle {\tfrac {1}{2}}{\ddot {h}}_{{\hat {\theta }}{\hat {\phi }}}=-R_{{\hat {t}}{\hat {\theta }}{\hat {t}}{\hat {\phi }}}=-R_{{\hat {r}}{\hat {\theta }}{\hat {r}}{\hat {\phi }}}=R_{{\hat {t}}{\hat {\theta }}{\hat {r}}{\hat {\phi }}}=R_{{\hat {r}}{\hat {\theta }}{\hat {t}}{\hat {\phi }}}\ ,} assuming propagation in the r ^ {\displaystyle {\hat {r}}} direction. Combining these, and using the definition of Ψ 4 {\displaystyle \Psi _{4}} above, we can write Ψ 4 = 1 2 ( h ¨ θ ^ θ ^ − h ¨ ϕ ^ ϕ ^ ) + i h ¨ θ ^ ϕ ^ = − h ¨ + + i h ¨ × . {\displaystyle \Psi _{4}={\tfrac {1}{2}}\left({\ddot {h}}_{{\hat {\theta }}{\hat {\theta }}}-{\ddot {h}}_{{\hat {\phi }}{\hat {\phi }}}\right)+i{\ddot {h}}_{{\hat {\theta }}{\hat {\phi }}}=-{\ddot {h}}_{+}+i{\ddot {h}}_{\times }\,.} Far from a source, in nearly flat space, the fields h + {\displaystyle h_{+}} and h × {\displaystyle h_{\times }} encode everything about gravitational radiation propagating in a given direction. Thus, we see that Ψ 4 {\displaystyle \Psi _{4}} encodes in a single complex field everything about (outgoing) gravitational waves. Using the wave-generation formalism summarised by Thorne, [ 17 ] we can write the radiation field quite compactly in terms of the mass multipole , current multipole , and spin-weighted spherical harmonics : Ψ 4 ( t , r , θ , ϕ ) = − 1 r 2 ∑ ℓ = 2 ∞ ∑ m = − ℓ ℓ [ ( ℓ + 2 ) I ℓ m ( t − r ) − i ( ℓ + 2 ) S ℓ m ( t − r ) ] − 2 Y ℓ m ( θ , ϕ ) . {\displaystyle \Psi _{4}(t,r,\theta ,\phi )=-{\frac {1}{r{\sqrt {2}}}}\sum _{\ell =2}^{\infty }\sum _{m=-\ell }^{\ell }\left[{}^{(\ell +2)}I^{\ell m}(t-r)-i\ {}^{(\ell +2)}S^{\ell m}(t-r)\right]{}_{-2}Y_{\ell m}(\theta ,\phi )\ .} Here, prefixed superscripts indicate time derivatives. That is, we define ( ℓ ) G ( t ) = ( d d t ) ℓ G ( t ) . {\displaystyle {}^{(\ell )}G(t)=\left({\frac {d}{dt}}\right)^{\ell }G(t)\ .} The components I ℓ m {\displaystyle I^{\ell m}} and S ℓ m {\displaystyle S^{\ell m}} are the mass and current multipoles, respectively. − 2 Y ℓ m {\displaystyle {}_{-2}Y_{\ell m}} is the spin-weight −2 spherical harmonic.
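Because Ψ 4 = − ḧ + + i ḧ × , it can be estimated directly from sampled polarization time series. The short Python sketch below is my own illustration: the sinusoidal waveform, amplitude and frequency are invented for demonstration, and only the relation between Ψ 4 and the two polarizations comes from the text above.

```python
import numpy as np

# Invented test waveform: a monochromatic signal of frequency f (illustration only).
t = np.linspace(0.0, 10.0, 4001)
dt = t[1] - t[0]
f = 0.5
omega = 2 * np.pi * f
h_plus  = 1e-21 * np.cos(omega * t)
h_cross = 1e-21 * np.sin(omega * t)

# Second time-derivatives by finite differences, then Psi_4 = -h''_plus + i h''_cross.
ddot = lambda h: np.gradient(np.gradient(h, dt), dt)
psi4 = -ddot(h_plus) + 1j * ddot(h_cross)

# For this waveform h'' = -omega**2 h, so Psi_4 should equal omega**2 * (h_plus - i*h_cross).
expected = omega**2 * (h_plus - 1j * h_cross)
print(np.allclose(psi4[5:-5], expected[5:-5], rtol=1e-3, atol=0.0))   # True away from the edges
```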
https://en.wikipedia.org/wiki/Newman–Penrose_formalism
Michael D. Alder [ 1 ] is an Australian mathematician, formerly an assistant professor at the University of Western Australia . [ 2 ] Alder is known for his popular writing, such as sardonic articles about the lack of basic arithmetic skills in young adults. [ 3 ] Alder received a B.Sc. in physics from Imperial College , then a PhD in algebraic topology from the University of Liverpool , and an M. Eng. Sc. from the University of Western Australia. [ 4 ] He was an assistant professor at the University of Western Australia until 2011. [ 5 ] Newton's flaming laser sword (also known as Alder's razor ) is a philosophical razor devised by Alder and discussed in an essay in the May/June 2004 issue of Philosophy Now . [ 6 ] The principle, which addresses the differing views of scientists and philosophers on epistemology and knowledge, was summarized by Alder as follows: [ 6 ] In its weakest form it says that we should not dispute propositions unless they can be shown by precise logic and/or mathematics to have observable consequences. In its strongest form it demands a list of observable consequences and a formal demonstration that they are indeed consequences of the proposition claimed. The razor is humorously named after Isaac Newton , as it is inspired by Newtonian thought and is called a "flaming laser sword", because it is "much sharper and more dangerous than Occam's razor ". [ 6 ] Alder writes that the average scientist does not hold philosophy in high regard, considering it "somewhere between sociology and literary criticism ". [ 6 ] He has strongly criticized what he sees as the disproportionate influence of Greek philosophy —especially Platonism —in modern philosophy . He contrasts the scientist's Popperian approach to the philosopher's Platonic approach , which he describes as pure reason . He illustrates this with the example of the irresistible force paradox , amongst others. According to Alder, the scientist's answer to the paradox "What happens when an irresistible force is exerted on an immovable object" is that the premise of the question is flawed: either the object is moved (and thus the object is movable), or it is not (thus the force is resistible): [ 6 ] Eventually I concluded that language was bigger than the universe, that it was possible to talk about things in the same sentence which could not both be found in the real world. The real world might conceivably contain some object which had never so far been moved, and it might contain a force that had never successfully been resisted, but the question of whether the object was really immovable could only be known if all possible forces had been tried on it and left it unmoved. So the matter could be resolved by trying out the hitherto irresistible force on the hitherto immovable object to see what happened. Either the object would move or it wouldn't, which would tell us only that either the hitherto immovable object was not in fact immovable, or that the hitherto irresistible force was in fact resistible. That is, to the scientist, the question can be solved by experiment. Alder admits, however, that "While the Newtonian insistence on ensuring that any statement is testable by observation... undoubtedly cuts out the crap, it also seems to cut out almost everything else as well." [ 6 ]
https://en.wikipedia.org/wiki/Newton's_Flaming_Laser_Sword
In mathematics , Newton's identities , also known as the Girard–Newton formulae , give relations between two types of symmetric polynomials , namely between power sums and elementary symmetric polynomials . Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k -th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P , without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard . They have applications in many areas of mathematics, including Galois theory , invariant theory , group theory , combinatorics , as well as further applications outside mathematics, including general relativity . Let x 1 , ..., x n be variables, denote for k ≥ 1 by p k ( x 1 , ..., x n ) the k -th power sum : and for k ≥ 0 denote by e k ( x 1 , ..., x n ) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so Then Newton's identities can be stated as valid for all n ≥ k ≥ 1 . Also, one has for all k > n ≥ 1 . Concretely, one gets for the first few values of k : The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n -th identity), which makes it possible to state them as identities in the ring of symmetric functions . In that ring one has and so on; here the left-hand sides never become zero. These equations allow to recursively express the e i in terms of the p k ; to be able to do the inverse, one may rewrite them as In general, we have valid for all n ≥ k ≥ 1. Also, one has for all k > n ≥ 1. The polynomial with roots x i may be expanded as where the coefficients e k ( x 1 , … , x n ) {\displaystyle e_{k}(x_{1},\ldots ,x_{n})} are the symmetric polynomials defined above. Given the power sums of the roots the coefficients of the polynomial with roots x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} may be expressed recursively in terms of the power sums as Formulating polynomials in this way is useful in using the method of Delves and Lyness [ 1 ] to find the zeros of an analytic function. When the polynomial above is the characteristic polynomial of a matrix A {\displaystyle \mathbf {A} } (in particular when A {\displaystyle \mathbf {A} } is the companion matrix of the polynomial), the roots x i {\displaystyle x_{i}} are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k {\displaystyle k} , the matrix A k {\displaystyle \mathbf {A} ^{k}} has as eigenvalues the powers x i k {\displaystyle x_{i}^{k}} , and each eigenvalue x i {\displaystyle x_{i}} of A {\displaystyle \mathbf {A} } contributes its multiplicity to that of the eigenvalue x i k {\displaystyle x_{i}^{k}} of A k {\displaystyle \mathbf {A} ^{k}} . Then the coefficients of the characteristic polynomial of A k {\displaystyle \mathbf {A} ^{k}} are given by the elementary symmetric polynomials in those powers x i k {\displaystyle x_{i}^{k}} . In particular, the sum of the x i k {\displaystyle x_{i}^{k}} , which is the k {\displaystyle k} -th power sum p k {\displaystyle p_{k}} of the roots of the characteristic polynomial of A {\displaystyle \mathbf {A} } , is given by its trace : p k = tr ⁡ ( A k ) . 
{\displaystyle p_{k}=\operatorname {tr} (\mathbf {A} ^{k})\,.} The Newton identities now relate the traces of the powers A k {\displaystyle \mathbf {A} ^{k}} to the coefficients of the characteristic polynomial of A {\displaystyle \mathbf {A} } . Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers A k {\displaystyle \mathbf {A} ^{k}} and their traces. This computation requires computing the traces of matrix powers A k {\displaystyle \mathbf {A} ^{k}} and solving a triangular system of equations. Both can be done in complexity class NC (solving a triangular system can be done by divide-and-conquer). Therefore, characteristic polynomial of a matrix can be computed in NC. By the Cayley–Hamilton theorem , every matrix satisfies its characteristic polynomial, and a simple transformation allows to find the adjugate matrix in NC. Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840), a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0. For a given n , the elementary symmetric polynomials e k ( x 1 ,..., x n ) for k = 1,..., n form an algebraic basis for the space of symmetric polynomials in x 1 ,.... x n : every polynomial expression in the x i that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials , and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial t n + ∑ k = 1 n ( − 1 ) k a k t n − k {\textstyle t^{n}+\sum _{k=1}^{n}(-1)^{k}a_{k}t^{n-k}} with all coefficients a k considered as free parameters, this means that every symmetric polynomial expression S ( x 1 ,..., x n ) in its roots can be expressed instead as a polynomial expression P ( a 1 ,..., a n ) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the a k as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field). The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials. There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them. Denoting by h k the complete homogeneous symmetric polynomial (that is, the sum of all monomials of degree k ), the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities of in the ring of symmetric functions , they read valid for all n ≥ k ≥ 1. Contrary to Newton's identities, the left-hand sides do not become zero for large k , and the right-hand sides contain ever more non-zero terms. 
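The trace-based computation of the characteristic polynomial can be sketched in a few lines of Python. This is my own illustration (the sample matrix and function name are arbitrary), using the Newton identities directly rather than the rearranged Faddeev–LeVerrier recursion: the traces of the matrix powers supply the power sums, and the recursion recovers the elementary symmetric polynomials of the eigenvalues, i.e. the polynomial coefficients, without computing any eigenvalues.

```python
import numpy as np

def charpoly_from_traces(A):
    """Coefficients of det(t*I - A) = t^n - e_1 t^{n-1} + e_2 t^{n-2} - ...,
    obtained from the power sums p_k = tr(A^k) via Newton's identities."""
    n = A.shape[0]
    p = [None] + [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, n + 1)]
    e = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        # Newton's identity: k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    # Monic coefficients [1, -e_1, +e_2, -e_3, ...], highest degree first.
    return [(-1) ** k * e[k] for k in range(n + 1)]

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(charpoly_from_traces(A))   # [1.0, -9.0, 24.0, -18.0]
print(np.poly(A))                # numpy's eigenvalue-based result, for comparison
```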
For the first few values of k , one has These relations can be justified by an argument analogous to the one by comparing coefficients in power series given above, based in this case on the generating function identity Proofs of Newton's identities, like these given below, cannot be easily adapted to prove these variants of those identities. As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring Λ Q of symmetric functions with rational coefficients: and so forth. [ 2 ] The general formula can be conveniently expressed as where the B n is the complete exponential Bell polynomial . This expression also leads to the following identity for generating functions: Applied to a monic polynomial, these formulae express the coefficients in terms of the power sums of the roots: replace each e i by a i and each p k by s k . The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations and so forth, in which there are only plus signs. In terms of the complete Bell polynomial, These expressions correspond exactly to the cycle index polynomials of the symmetric groups , if one interprets the power sums p i as indeterminates: the coefficient in the expression for h k of any monomial p 1 m 1 p 2 m 2 ... p l m l is equal to the fraction of all permutations of k that have m 1 fixed points, m 2 cycles of length 2, ..., and m l cycles of length l . Explicitly, this coefficient can be written as 1 / N {\displaystyle 1/N} where N = ∏ i = 1 l ( m i ! i m i ) {\textstyle N=\prod _{i=1}^{l}(m_{i}!\,i^{m_{i}})} ; this N is the number permutations commuting with any given permutation π of the given cycle type. The expressions for the elementary symmetric functions have coefficients with the same absolute value, but a sign equal to the sign of π , namely (−1) m 2 + m 4 +... . It can be proved by considering the following inductive step: By analogy with the derivation of the generating function of the e n {\displaystyle e_{n}} , we can also obtain the generating function of the h n {\displaystyle h_{n}} , in terms of the power sums, as: This generating function is thus the plethystic exponential of p 1 t = ( x 1 + ⋯ + x n ) t {\displaystyle p_{1}t=(x_{1}+\cdots +x_{n})t} . One may also use Newton's identities to express power sums in terms of elementary symmetric polynomials, which does not introduce denominators: The first four formulas were obtained by Albert Girard in 1629 (thus before Newton). [ 3 ] The general formula (for all positive integers m ) is: This can be conveniently stated in terms of ordinary Bell polynomials as or equivalently as the generating function : [ 4 ] which is analogous to the Bell polynomial exponential generating function given in the previous subsection . The multiple summation formula above can be proved by considering the following inductive step: Finally one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in term of them: and so on. Apart from the replacement of each e i by the corresponding h i , the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial ∏ i = 1 l h i m i {\textstyle \prod _{i=1}^{l}h_{i}^{m_{i}}} is −(−1) m 1 + m 2 + m 3 +... . 
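The two recursions translate directly into code. The sketch below is an illustration I have added (exact rational arithmetic via Fraction is an implementation choice): it computes the elementary symmetric polynomials and the complete homogeneous symmetric polynomials from the power sums and cross-checks both against their direct definitions for a small example.

```python
from itertools import combinations, combinations_with_replacement
from math import prod
from fractions import Fraction

def e_from_p(p, kmax):
    """Elementary symmetric polynomials via Newton's identities:
       k * e_k = sum_{i=1}^{k} (-1)**(i-1) * e_{k-i} * p_i."""
    e = [Fraction(1)] + [Fraction(0)] * kmax
    for k in range(1, kmax + 1):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    return e

def h_from_p(p, kmax):
    """Complete homogeneous symmetric polynomials via the plus-sign variant:
       k * h_k = sum_{i=1}^{k} h_{k-i} * p_i."""
    h = [Fraction(1)] + [Fraction(0)] * kmax
    for k in range(1, kmax + 1):
        h[k] = sum(h[k - i] * p[i] for i in range(1, k + 1)) / k
    return h

xs = [2, 3, 5]
kmax = 3
p = [None] + [Fraction(sum(x ** k for x in xs)) for k in range(1, kmax + 1)]

print([int(v) for v in e_from_p(p, kmax)])                                        # [1, 10, 31, 30]
print([sum(prod(c) for c in combinations(xs, k)) for k in range(kmax + 1)])       # direct e_k: matches
print([int(v) for v in h_from_p(p, kmax)])                                        # [1, 10, 69, 410]
print([sum(prod(c) for c in combinations_with_replacement(xs, k)) for k in range(kmax + 1)])  # direct h_k: matches
```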
In particular the above description of the absolute value of the coefficients applies here as well. The general formula (for all non-negative integers m ) is: One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or it counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and apply Cramer's rule to find the solution for the final unknown. For instance taking Newton's identities in the form we consider p 1 , − p 2 , p 3 , … , ( − 1 ) n p n − 1 {\displaystyle p_{1},-p_{2},p_{3},\ldots ,(-1)^{n}p_{n-1}} and p n {\displaystyle p_{n}} as unknowns, and solve for the final one, giving Solving for e n {\displaystyle e_{n}} instead of for p n {\displaystyle p_{n}} is similar, as the analogous computations for the complete homogeneous symmetric polynomials; in each case the details are slightly messier than the final results, which are (Macdonald 1979, p. 20): Note that the use of determinants makes that the formula for h n {\displaystyle h_{n}} has additional minus signs compared to the one for e n {\displaystyle e_{n}} , while the situation for the expanded form given earlier is opposite. As remarked in (Littlewood 1950, p. 84) one can alternatively obtain the formula for h n {\displaystyle h_{n}} by taking the permanent of the matrix for e n {\displaystyle e_{n}} instead of the determinant, and more generally an expression for any Schur polynomial can be obtained by taking the corresponding immanant of this matrix. Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations. One can obtain the k -th Newton identity in k variables by substitution into as follows. Substituting x j for t gives Summing over all j gives where the terms for i = 0 were taken out of the sum because p 0 is (usually) not defined. This equation immediately gives the k -th Newton identity in k variables. Since this is an identity of symmetric polynomials (homogeneous) of degree k , its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k -th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k -th Newton identity in k (suitably chosen) variables. Another derivation can be obtained by computations in the ring of formal power series R [[ t ]], where R is Z [ x 1 ,..., x n ], the ring of polynomials in n variables x 1 ,..., x n over the integers. 
Starting again from the basic relation and "reversing the polynomials" by substituting 1/ t for t and then multiplying both sides by t n to remove negative powers of t , gives (the above computation should be performed in the field of fractions of R [[ t ]]; alternatively, the identity can be obtained simply by evaluating the product on the left side) Swapping sides and expressing the a i as the elementary symmetric polynomials they stand for gives the identity One formally differentiates both sides with respect to t , and then (for convenience) multiplies by t , to obtain where the polynomial on the right hand side was first rewritten as a rational function in order to be able to factor out a product out of the summation, then the fraction in the summand was developed as a series in t , using the formula and finally the coefficient of each t j was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of t k on both sides one obtains which gives the k -th Newton identity. The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r ( i ) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function m γ where γ is a hook shape ( i ,1,1,...,1)). In particular r ( k ) = p k ; for r (1) the description would amount to that of e k , but this case was excluded since here monomials no longer have any distinguished variable. All products p i e k − i can be expressed in terms of the r ( j ) with the first and last case being somewhat special. One has since each product of terms on the left involving distinct variables contributes to r ( i ), while those where the variable from p i already occurs among the variables of the term from e k − i contributes to r ( i + 1), and all terms on the right are so obtained exactly once. For i = k one multiplies by e 0 = 1, giving trivially Finally the product p 1 e k −1 for i = 1 gives contributions to r ( i + 1) = r (2) like for other values i < k , but the remaining contributions produce k times each monomial of e k , since any one of the variables may come from the factor p 1 ; thus The k -th Newton identity is now obtained by taking the alternating sum of these equations, in which all terms of the form r ( i ) cancel out. A short combinatorial proof of Newton's identities was given by Doron Zeilberger in 1984. [ 5 ]
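The generating-function step underlying this derivation can be checked symbolically. The sympy sketch below is my addition; it uses one common sign convention, with E(t) the generating polynomial of the elementary symmetric polynomials, which may differ by signs from the precise form used in the derivation above.

```python
import sympy as sp

t = sp.symbols('t')
xs = sp.symbols('x1 x2 x3')
kmax = 6

# Generating function of the elementary symmetric polynomials: E(t) = prod(1 + x_i t).
E = sp.prod([1 + x * t for x in xs])

# Power sums p_k = sum_i x_i**k.
p = {k: sum(x ** k for x in xs) for k in range(1, kmax + 1)}

# Identity behind the power-series derivation (in this sign convention):
#   t * E'(t) / E(t) = sum_{k>=1} (-1)**(k-1) * p_k * t**k.
lhs = sp.series(t * sp.diff(E, t) / E, t, 0, kmax + 1).removeO()
rhs = sum((-1) ** (k - 1) * p[k] * t ** k for k in range(1, kmax + 1))
print(sp.expand(lhs - rhs))   # 0
```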
https://en.wikipedia.org/wiki/Newton's_identities
In mathematics , the Newton inequalities are named after Isaac Newton . Suppose a 1 , a 2 , ..., a n are non-negative real numbers and let e k {\displaystyle e_{k}} denote the k th elementary symmetric polynomial in a 1 , a 2 , ..., a n . Then the elementary symmetric means , given by S k = e k ( n k ) , {\displaystyle S_{k}={\frac {e_{k}}{\binom {n}{k}}},} satisfy the inequality S k − 1 S k + 1 ≤ S k 2 {\displaystyle S_{k-1}\,S_{k+1}\leq S_{k}^{2}} for 1 ≤ k ≤ n − 1. Equality holds if and only if all the numbers a i are equal. It can be seen that S 1 is the arithmetic mean , and S n is the n -th power of the geometric mean .
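A quick numerical check of the inequalities (an illustration I have added; the sample values are arbitrary):

```python
from itertools import combinations
from math import comb, prod

def symmetric_means(a):
    """Elementary symmetric means S_k = e_k / C(n, k) for k = 0..n."""
    n = len(a)
    return [sum(prod(c) for c in combinations(a, k)) / comb(n, k) for k in range(n + 1)]

a = [1.0, 2.0, 4.0, 8.0]
S = symmetric_means(a)

# Newton's inequalities: S_{k-1} * S_{k+1} <= S_k**2 for 1 <= k <= n-1.
for k in range(1, len(a)):
    print(k, S[k - 1] * S[k + 1] <= S[k] ** 2)   # True for every k
```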
https://en.wikipedia.org/wiki/Newton's_inequalities
In the study of heat transfer , Newton's law of cooling is a physical law which states that the rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its environment. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. As such, it is equivalent to a statement that the heat transfer coefficient , which mediates between heat losses and temperature differences, is a constant. In heat conduction , Newton's law is generally followed as a consequence of Fourier's law . The thermal conductivity of most materials is only weakly dependent on temperature, so the constant heat transfer coefficient condition is generally met. In convective heat transfer , Newton's Law is followed for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference. In the case of heat transfer by thermal radiation , Newton's law of cooling holds only for very small temperature differences. When stated in terms of temperature differences, Newton's law (with several further simplifying assumptions, such as a low Biot number and a temperature-independent heat capacity ) results in a simple differential equation expressing temperature-difference as a function of time . The solution to that equation describes an exponential decrease of temperature-difference over time. This characteristic decay of the temperature-difference is also associated with Newton's law of cooling. Isaac Newton published his work on cooling anonymously in 1701 as "Scala graduum Caloris. Calorum Descriptiones & signa." in Philosophical Transactions . [ 1 ] It was the first heat transfer formulation and serves as the formal basis of convective heat transfer . [ 2 ] Newton did not originally state his law in the above form in 1701. Rather, using today's terms, Newton noted after some mathematical manipulation that the rate of temperature change of a body is proportional to the difference in temperatures between the body and its surroundings. This final simplest version of the law, given by Newton himself, was partly due to confusion in Newton's time between the concepts of heat and temperature, which would not be fully disentangled until much later. [ 3 ] In 2020, Maruyama and Moriya repeated Newton's experiments with modern apparatus, and they applied modern data reduction techniques. [ 4 ] In particular, these investigators took account of thermal radiation at high temperatures (as for the molten metals Newton used), and they accounted for buoyancy effects on the air flow. By comparison to Newton's original data, they concluded that his measurements (from 1692 to 1693) had been "quite accurate". [ 4 ] Convection cooling is sometimes said to be governed by "Newton's law of cooling." When the heat transfer coefficient is independent, or relatively independent, of the temperature difference between object and environment, Newton's law is followed. The law holds well for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference. Newton's law is most closely obeyed in purely conduction-type cooling. However, the heat transfer coefficient is a function of the temperature difference in natural convective (buoyancy driven) heat transfer. 
In that case, Newton's law only approximates the result when the temperature difference is relatively small. Newton himself realized this limitation. A correction to Newton's law concerning convection for larger temperature differentials by including an exponent, was made in 1817 by Dulong and Petit . [ 5 ] (These men are better-known for their formulation of the Dulong–Petit law concerning the molar specific heat capacity of a crystal.) Another situation that does not obey Newton's law is radiative heat transfer . Radiative cooling is better described by the Stefan–Boltzmann law in which the heat transfer rate varies as the difference in the 4th powers of the absolute temperatures of the object and of its environment. The statement of Newton's law used in the heat transfer literature puts into mathematics the idea that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings . For a temperature-independent heat transfer coefficient, the statement is: q = h ( T ( t ) − T env ) = h Δ T ( t ) , {\displaystyle q=h\left(T(t)-T_{\text{env}}\right)=h\,\Delta T(t),} where In global parameters by integrating on the surface area the heat flux, it can be also stated as: Q ˙ = ∮ A h ( T ( t ) − T env ) d A = ∮ A h Δ T ( t ) d A , {\displaystyle {\dot {Q}}=\oint _{A}h\left(T(t)-T_{\text{env}}\right)dA=\oint _{A}h\,\Delta T(t)dA,} where If the heat transfer coefficient and the temperature difference are uniform along the heat transfer surface, the above formula simplifies to: Q ˙ = h A ( T ( t ) − T env ) = h A Δ T ( t ) {\displaystyle {\dot {Q}}=hA\left(T(t)-T_{\text{env}}\right)=hA\,\Delta T(t)} . The heat transfer coefficient h depends upon physical properties of the fluid and the physical situation in which convection occurs. Therefore, a single usable heat transfer coefficient (one that does not vary significantly across the temperature-difference ranges covered during cooling and heating) must be derived or found experimentally for every system that is to be analyzed. Formulas and correlations are available in many references to calculate heat transfer coefficients for typical configurations and fluids. For laminar flows, the heat transfer coefficient is usually smaller than in turbulent flows because turbulent flows have strong mixing within the boundary layer on the heat transfer surface. [ 6 ] Note the heat transfer coefficient changes in a system when a transition from laminar to turbulent flow occurs. The Biot number, a dimensionless quantity, is defined for a body as Bi = h L C k b , {\displaystyle {\text{Bi}}={\frac {hL_{\rm {C}}}{k_{\rm {b}}}},} where The physical significance of Biot number can be understood by imagining the heat flow from a hot metal sphere suddenly immersed in a pool to the surrounding fluid. The heat flow experiences two resistances: the first outside the surface of the sphere, and the second within the solid metal (which is influenced by both the size and composition of the sphere). The ratio of these resistances is the dimensionless Biot number. If the thermal resistance at the fluid/sphere interface exceeds that thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed always to have the same temperature, although this temperature may be changing, as heat passes into the sphere from the surface. 
The equation to describe this change in (relatively uniform) temperature inside the object, is the simple exponential one described in Newton's law of cooling expressed in terms of temperature difference (see below). In contrast, the metal sphere may be large, causing the characteristic length to increase to the point that the Biot number is larger than one. In this case, temperature gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a thermally insulating (poorly conductive) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that at the fluid/sphere boundary, even with a much smaller sphere. In this case, again, the Biot number will be greater than one. Values of the Biot number smaller than 0.1 imply that the heat conduction inside the body is much faster than the heat convection away from its surface, and temperature gradients are negligible inside of it. This can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number less than 0.1 typically indicates less than 5% error will be present when assuming a lumped-capacitance model of transient heat transfer (also called lumped system analysis). [ 7 ] Typically, this type of analysis leads to simple exponential heating or cooling behavior ("Newtonian" cooling or heating) since the internal energy of the body is directly proportional to its temperature, which in turn determines the rate of heat transfer into or out of it. This leads to a simple first-order differential equation which describes heat transfer in these systems. Having a Biot number smaller than 0.1 labels a substance as "thermally thin," and temperature can be assumed to be constant throughout the material's volume. The opposite is also true: A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity , are described in the article on the heat equation . Simple solutions for transient cooling of an object may be obtained when the internal thermal resistance within the object is small in comparison to the resistance to heat transfer away from the object's surface (by external conduction or convection), which is the condition for which the Biot number is less than about 0.1. This condition allows the presumption of a single, approximately uniform temperature inside the body, which varies in time but not with position. (Otherwise the body would have many different temperatures inside it at any one time.) This single temperature will generally change exponentially as time progresses (see below). The condition of low Biot number leads to the so-called lumped capacitance model . In this model, the internal energy (the amount of thermal energy in the body) is calculated by assuming a constant heat capacity . In that case, the internal energy of the body is a linear function of the body's single internal temperature. The lumped capacitance solution that follows assumes a constant heat transfer coefficient, as would be the case in forced convection. 
For free convection, the lumped capacitance model can be solved with a heat transfer coefficient that varies with temperature difference. [ 8 ] A body treated as a lumped capacitance object, with a total internal energy of U {\displaystyle U} (in joules), is characterized by a single uniform internal temperature, T ( t ) {\displaystyle T(t)} . The heat capacitance, C {\displaystyle C} , of the body is C = d U / d T {\displaystyle C=dU/dT} (in J/K), for the case of an incompressible material. The internal energy may be written in terms of the temperature of the body, the heat capacitance (taken to be independent of temperature), and a reference temperature at which the internal energy is zero: U = C ( T − T ref ) {\displaystyle U=C(T-T_{\text{ref}})} . Differentiating U {\displaystyle U} with respect to time gives: d U d t = C d T d t . {\displaystyle {\frac {dU}{dt}}=C\,{\frac {dT}{dt}}.} Applying the first law of thermodynamics to the lumped object gives d U d t = − Q ˙ {\textstyle {\frac {dU}{dt}}=-{\dot {Q}}} , where the rate of heat transfer out of the body, Q ˙ {\displaystyle {\dot {Q}}} , may be expressed by Newton's law of cooling, and where no work transfer occurs for an incompressible material. Thus, d T ( t ) d t = − h A C ( T ( t ) − T env ) = − 1 τ Δ T ( t ) , {\displaystyle {\frac {dT(t)}{dt}}=-{\frac {hA}{C}}(T(t)-T_{\text{env}})=-{\frac {1}{\tau }}\ \Delta T(t),} where the time constant of the system is τ = C / ( h A ) {\displaystyle \tau =C/(hA)} . The heat capacitance C {\displaystyle C} may be written in terms of the object's specific heat capacity , c {\displaystyle c} (J/kg-K), and mass, m {\displaystyle m} (kg). The time constant is then τ = m c / ( h A ) {\displaystyle \tau =mc/(hA)} . When the environmental temperature is constant in time, we may define Δ T ( t ) = T ( t ) − T env {\displaystyle \Delta T(t)=T(t)-T_{\text{env}}} . The equation becomes d T ( t ) d t = d Δ T ( t ) d t = − 1 τ Δ T ( t ) . {\displaystyle {\frac {dT(t)}{dt}}={\frac {d\Delta T(t)}{dt}}=-{\frac {1}{\tau }}\Delta T(t).} The solution of this differential equation, by integration from the initial condition, is Δ T ( t ) = Δ T ( 0 ) e − t / τ . {\displaystyle \Delta T(t)=\Delta T(0)\,e^{-t/\tau }.} where Δ T ( 0 ) {\displaystyle \Delta T(0)} is the temperature difference at time 0. Reverting to temperature, the solution is T ( t ) = T env + ( T ( 0 ) − T env ) e − t / τ . {\displaystyle T(t)=T_{\text{env}}+(T(0)-T_{\text{env}})\,e^{-t/\tau }.} The temperature difference between the body and the environment decays exponentially as a function of time. By defining r = 1 / τ = h A / C {\displaystyle r=1/\tau =hA/C} , the differential equation becomes T ˙ = r ( T env − T ( t ) ) , {\displaystyle {\dot {T}}=r\left(T_{\text{env}}-T(t)\right),} where Solving the initial-value problem using separation of variables gives T ( t ) = T env + ( T ( 0 ) − T env ) e − r t . {\displaystyle T(t)=T_{\text{env}}+(T(0)-T_{\text{env}})e^{-rt}.} See also:
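The following Python sketch is not part of the original article; the material properties, heat transfer coefficient and sphere size are assumed, illustrative values. It checks that the Biot number is small enough for the lumped-capacitance model to apply, then compares the closed-form exponential solution with a simple numerical integration of the same differential equation.

```python
import numpy as np

# Assumed parameters: a small aluminium sphere cooling in air (illustrative only).
h = 15.0                        # heat transfer coefficient, W/(m^2 K)
k_b = 205.0                     # thermal conductivity of the body, W/(m K)
rho, c = 2700.0, 900.0          # density (kg/m^3) and specific heat (J/(kg K))
R = 0.01                        # sphere radius, m
A = 4 * np.pi * R**2            # surface area
V = (4.0 / 3.0) * np.pi * R**3  # volume
m = rho * V                     # mass

L_c = V / A                     # characteristic length of a sphere: R/3
Bi = h * L_c / k_b
print(f"Biot number = {Bi:.2e}")   # much less than 0.1, so the lumped model applies

tau = m * c / (h * A)           # time constant of the system
T_env, T0 = 300.0, 400.0        # environment and initial temperatures, K
t = np.linspace(0.0, 5 * tau, 1000)

# Closed-form solution of dT/dt = -(T - T_env)/tau.
T_exact = T_env + (T0 - T_env) * np.exp(-t / tau)

# Simple forward-Euler integration of the same equation, for comparison.
T_num = np.empty_like(t)
T_num[0] = T0
dt = t[1] - t[0]
for i in range(1, len(t)):
    T_num[i] = T_num[i - 1] - dt * (T_num[i - 1] - T_env) / tau
print(np.max(np.abs(T_num - T_exact)))   # small discretization error (a fraction of a kelvin)
```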
https://en.wikipedia.org/wiki/Newton's_law_of_cooling
Newton's law of universal gravitation describes gravity as a force by stating that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers of mass. Separated objects attract and are attracted as if all their mass were concentrated at their centers . The publication of the law has become known as the " first great unification ", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors. [ 1 ] [ 2 ] [ 3 ] This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning . [ 4 ] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica (Latin for 'Mathematical Principles of Natural Philosophy' (the Principia )), first published on 5 July 1687. The equation for universal gravitation thus takes the form: F = G m 1 m 2 r 2 , {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},} where F is the gravitational force acting between two objects, m 1 and m 2 are the masses of the objects, r is the distance between the centers of their masses , and G is the gravitational constant . The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. [ 5 ] It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws , where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant. Newton's law was later superseded by Albert Einstein 's theory of general relativity , but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury 's orbit around the Sun ). Before Newton's law of gravity, there were many theories explaining gravity. Philosophers made observations about things falling down − and developed theories why they do – as early as Aristotle who thought that rocks fall to the ground because seeking the ground was an essential part of their nature. [ 6 ] Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler 's laws of planetary motion summarized Tycho Brahe 's astronomical observations. [ 7 ] : 132 Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitation force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. His calculations of the Moon orbit time was within 16% of the known value. 
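As a small worked example of F = Gm₁m₂/r², the sketch below evaluates the Earth–Moon attraction using rounded textbook values for the masses and mean separation (the numbers serve only as an illustration):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant

def gravitational_force(m1, m2, r):
    """Magnitude of the mutual attraction (in newtons) between two point masses."""
    return G * m1 * m2 / r ** 2

# Rounded Earth-Moon values, used purely as an illustration.
m_earth = 5.972e24     # kg
m_moon = 7.35e22       # kg
r = 3.844e8            # m, mean centre-to-centre distance

print(f"F = {gravitational_force(m_earth, m_moon, r):.2e} N")   # roughly 2e20 N
```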
By 1680, new values for the diameter of the Earth improved his orbit time to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture. [ 8 ] : 201 In 1687 Newton published his Principia which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results. [ 7 ] : 134 His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to their separation squared. [ 9 ] : 28 Newton's original formula was: F o r c e o f g r a v i t y ∝ m a s s o f o b j e c t 1 × m a s s o f o b j e c t 2 d i s t a n c e f r o m c e n t e r s 2 {\displaystyle {\rm {Force\,of\,gravity}}\propto {\frac {\rm {mass\,of\,object\,1\,\times \,mass\,of\,object\,2}}{\rm {distance\,from\,centers^{2}}}}} where the symbol ∝ {\displaystyle \propto } means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them (the gravitational constant). Newton would need an accurate measure of this constant to prove his inverse-square law. When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society , Robert Hooke made a claim that Newton had obtained the inverse square law from him, ultimately a frivolous accusation. [ 8 ] : 204 While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it." [ 10 ] [ 11 ] : 26 Newton's 1713 General Scholium in the second edition of Principia explains his model of gravity, translated in this case by Samuel Clarke : I have explained the Pharnomena of the Heavens and the Sea, by the Force of Gravity; but the Cause of Gravity I have not yet assigned. It is a Force arising from some Cause, which reaches to the very Centers of the Sun and Planets, without any diminution of its Force: And it acts, not proportionally to the Surfaces of the Particles it acts upon, as Mechanical Causes use to do; but proportionally to the Quantity of Solid Matter: And its Action reaches every way to immense Distances, decreasing always in a duplicate ratio of the Distances. But the Cause of these Properties of Gravity, I have not yet found deducible from Pharnomena: And Hypotheses I make not. [ 12 ] : 383 The last sentence is Newton's famous [ 12 ] and highly debated [ 13 ] Latin phrase Hypotheses non fingo . In other translations it comes out "I feign no hypotheses". [ 14 ] In modern language, the law states the following: F = G m 1 m 2 r 2 {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}\ } where Assuming SI units , F is measured in newtons (N), m 1 and m 2 in kilograms (kg), r in meters (m), and the constant G is 6.674 30 (15) × 10 −11 m 3 ⋅kg −1 ⋅s −2 . [ 16 ] The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G . 
[ 5 ] This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G ; instead he could only calculate a force relative to another force. If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses that constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies . In this way, it can be shown that an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. [ 15 ] (This is not generally true for non-spherically symmetrical bodies.) For points inside a spherically symmetric distribution of matter, Newton's shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r 0 from the center of the mass distribution: [ 17 ] As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere. Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors. F 21 = − G m 1 m 2 | r 21 | 2 r ^ 21 = − G m 1 m 2 | r 21 | 3 r 21 {\displaystyle \mathbf {F} _{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{2}}{\hat {\mathbf {r} }}_{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{3}}\mathbf {r} _{21}} where It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F 12 = − F 21 . The gravitational field is a vector field that describes the gravitational force that would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point. It is a generalisation of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r 12 and m instead of m 2 and define the gravitational field g ( r ) as: g ( r ) = − G m 1 | r | 2 r ^ {\displaystyle \mathbf {g} (\mathbf {r} )=-G{m_{1} \over {{\vert \mathbf {r} \vert }^{2}}}\,\mathbf {\hat {r}} } so that we can write: F ( r ) = m g ( r ) . {\displaystyle \mathbf {F} (\mathbf {r} )=m\mathbf {g} (\mathbf {r} ).} This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI , this is m/s 2 . Gravitational fields are also conservative ; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V ( r ) such that g ( r ) = − ∇ V ( r ) . 
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla V(\mathbf {r} ).} If m 1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g ( r ) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case V ( r ) = − G m 1 r . {\displaystyle V(r)=-G{\frac {m_{1}}{r}}.} As per Gauss's law , field in a symmetric body can be found by the mathematical equation: where ∂ V {\displaystyle \partial V} is a closed surface and M enc {\displaystyle M_{\text{enc}}} is the mass enclosed by the surface. Hence, for a hollow sphere of radius R {\displaystyle R} and total mass M {\displaystyle M} , | g ( r ) | = { 0 , if r < R G M r 2 , if r ≥ R {\displaystyle |\mathbf {g(r)} |={\begin{cases}0,&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}} For a uniform solid sphere of radius R {\displaystyle R} and total mass M {\displaystyle M} , | g ( r ) | = { G M r R 3 , if r < R G M r 2 , if r ≥ R {\displaystyle |\mathbf {g(r)} |={\begin{cases}{\dfrac {GMr}{R^{3}}},&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}} Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities ϕ / c 2 {\displaystyle \phi /c^{2}} and ( v / c ) 2 {\displaystyle (v/c)^{2}} are both much less than one, where ϕ {\displaystyle \phi } is the gravitational potential , v {\displaystyle v} is the velocity of the objects being studied, and c {\displaystyle c} is the speed of light in vacuum. [ 19 ] For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since ϕ c 2 = G M s u n r o r b i t c 2 ∼ 10 − 8 , ( v E a r t h c ) 2 = ( 2 π r o r b i t ( 1 y r ) c ) 2 ∼ 10 − 8 , {\displaystyle {\frac {\phi }{c^{2}}}={\frac {GM_{\mathrm {sun} }}{r_{\mathrm {orbit} }c^{2}}}\sim 10^{-8},\quad \left({\frac {v_{\mathrm {Earth} }}{c}}\right)^{2}=\left({\frac {2\pi r_{\mathrm {orbit} }}{(1\ \mathrm {yr} )c}}\right)^{2}\sim 10^{-8},} where r orbit {\displaystyle r_{\text{orbit}}} is the radius of the Earth's orbit around the Sun. In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity. The first two conflicts with observations above were explained by Einstein's theory of general relativity , in which gravitation is a manifestation of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force resulting from the curvature of spacetime , because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime . In recent years, quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry . [ 21 ] The two-body problem has been completely solved, as has the restricted three-body problem . 
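The piecewise fields quoted above for a hollow shell and a uniform solid sphere follow from Gauss's law for gravity (the flux of g through a closed surface equals −4πG times the enclosed mass). A minimal sketch evaluating both cases, with Earth-like numbers used only as an illustration:

```python
G = 6.674e-11   # m^3 kg^-1 s^-2

def g_hollow_shell(r, R, M):
    """|g(r)| for a hollow sphere of radius R and total mass M."""
    return 0.0 if r < R else G * M / r ** 2

def g_solid_sphere(r, R, M):
    """|g(r)| for a uniform solid sphere of radius R and total mass M."""
    if r < R:
        return G * M * r / R ** 3   # grows linearly with r inside the sphere
    return G * M / r ** 2           # inverse-square outside

R, M = 6.371e6, 5.972e24            # Earth-like radius and mass (illustrative)
for r in (0.0, 0.5 * R, R, 2.0 * R):
    print(f"r = {r / R:3.1f} R:  shell {g_hollow_shell(r, R, M):6.3f} m/s^2,"
          f"  solid {g_solid_sphere(r, R, M):6.3f} m/s^2")
```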
[ notes 1 ] The n-body problem is an ancient, classical problem [ 22 ] of predicting the individual motions of a group of celestial objects interacting with each other gravitationally . Solving this problem – from the time of the Greeks and on – has been motivated by the desire to understand the motions of the Sun , planets and the visible stars . The classical problem can be informally stated as: given the quasi-steady orbital properties ( instantaneous position, velocity and time ) [ citation needed ] [ notes 2 ] of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times . [ citation needed ] [ notes 3 ] In the 20th century, understanding the dynamics of globular cluster star systems became an important n -body problem too. The n -body problem in general relativity is considerably more difficult to solve. [ citation needed ]
https://en.wikipedia.org/wiki/Newton's_law_of_universal_gravitation
Newton's metal is a fusible alloy with a low melting point . Its composition by weight is 8 parts bismuth , 5 parts lead and 3 parts tin ; its melting point is 97 °C. Newton's metal is comparable to Cerrobend , but avoids its toxic cadmium content. This has encouraged its use in medical applications such as easily shaped shielding during radiotherapy . [ 1 ] This alloy-related article is a stub . You can help Wikipedia by expanding it .

https://en.wikipedia.org/wiki/Newton's_metal
Newton's minimal resistance problem is a problem of finding a solid of revolution which experiences a minimum resistance when it moves through a homogeneous fluid with constant velocity in the direction of the axis of revolution, named after Isaac Newton , who posed and solved the problem in 1685 and published it in 1687 in his Principia Mathematica . [ 1 ] The problem initiated the field of the calculus of variations , as Newton introduced the concept of calculus of variations, with the problem being the first to be formulated and correctly solved, [ 1 ] [ 2 ] [ 3 ] appearing a decade before the brachistochrone problem , in which Newton also solved using the calculus of variations. [ 4 ] [ 2 ] Newton published the solution in Principia Mathematica without his derivation, and David Gregory was the first person who approached Newton and persuaded him to write an analysis for him. Then the derivation was shared with his students and peers by Gregory. [ 5 ] According to I. Bernard Cohen , in his Guide to Newton’s Principia , "The key to Newton’s reasoning was found in the 1880s, when the earl of Portsmouth gave his family’s vast collection of Newton’s scientific and mathematical papers to Cambridge University. Among Newton’s manuscripts they found the draft text of a letter, … in which Newton elaborated his mathematical argument. [This] was never fully understood, however, until the publication of the major manuscript documents by D. T. Whiteside [1974], whose analytical and historical commentary has enabled students of Newton not only to follow fully Newton’s path to discovery and proof, but also Newton’s later (1694) recomputation of the surface of least resistance". [ 6 ] [ 7 ] Even though Newton's model for the fluid was wrong as per our current understanding, the fluid he had considered finds its application in hypersonic flow theory as a limiting case. [ 8 ] In Proposition 34 of Book 2 of the Principia, Newton wrote: "If in a rare medium, consisting of equal particles freely disposed at equal distances from each other, a globe and a cylinder described on equal diameter move with equal velocities in the direction of the axis of the cylinder, the resistance of the globe will be but half as great as that of the cylinder." Following this proposition is a scholium containing the famous condition that the curve which, when rotated about its axis, generates the solid that experiences less resistance than any other solid having a fixed length, and width. In modern form, Newton's problem is to minimize the following integral: [ 9 ] [ 10 ] where y ( x ) {\displaystyle y(x)} represents the curve which generates a solid when it is rotated about the x axis, and y ˙ = d y / d x . {\displaystyle {\dot {y}}=dy/dx.} I is the reduction in resistance caused by the particles impinging upon the sloping surface DNG, formed by rotating the curve, instead of perpendicularly upon the horizontal projection of DNG on the rear disc DA from the direction of motion, in Fig. 1 below. Note that the front of the solid is the disc BG, the triangles GBC and GBR are not part of it, but are used below by Newton to express the minimum condition. This integral is related to the total resistance experienced by the body by the following relation: The problem is to find the curve that generates the solid that experiences less resistance than any other solid having a fixed axial length L and a fixed width H . 
Since the solid must taper in the direction of motion, H is the radius of the disc forming the rear surface of the curve rotated about the x axis. The units are chosen so that the constant of proportionality is unity. Also, note that y ˙ < 0 , {\displaystyle {\dot {y}}<0,} and the integral, which is evaluated between x = 0 and x = L is negative. Let y = h when x = L . When the curve is the horizontal line DK, so the solid is a cylinder, y = H , y ˙ = 0 , {\displaystyle y=H,{\dot {y}}=0,} the integral is zero, and the resistance of the cylinder is ρ = H 2 / 2 , {\displaystyle \rho =H^{2}/2,} which explains the constant term. The simplest way to apply the Euler–Lagrange equation to this problem is to rewrite the resistance as where x ˙ = d x / d y {\displaystyle {\dot {x}}=dx/dy} , and the integral, which is evaluated between y = H and y = h < H , is negative. Substituting the integrand F ( y , x ˙ ) = y / ( 1 + x ˙ 2 ) {\displaystyle F(y,{\dot {x}})=y/(1+{\dot {x}}^{2})} into the Euler–Lagrange equation and it follows that x ˙ y ( 1 + x ˙ 2 ) 2 {\displaystyle {\frac {{\dot {x}}y}{(1+{\dot {x}}^{2})^{2}}}} is constant, and this can be written as where p = − y ˙ > 0 {\displaystyle p=-{\dot {y}}>0} , and K 1 > 0 {\displaystyle K_{1}>0} is a constant. Although the curves that satisfy the minimum condition cannot be described by a simple function y = f ( x ), they may be plotted using p as a parameter, to obtain the corresponding coordinates ( x , y ) of the curves. The equation of x as a function of p is obtained from the minimum condition (1), and an equivalent of it was first found by Newton. Differentiating: and integrating: where K 2 > 0 {\displaystyle K_{2}>0} is a constant. Since y = H {\displaystyle y=H} when x = 0 {\displaystyle x=0} , and y = h {\displaystyle y=h} when x = L {\displaystyle x=L} , the constants K 1 , K 2 {\displaystyle K_{1},K_{2}} can be determined in terms of H , h and L . Because y from equation (1) can never be zero or negative, the front surface of any solid satisfying the minimum condition must be a disc (GB in Fig. 2 above). As this was the first example of this type of problem, Newton had to invent a completely new method of solution. Also, he went much deeper in his analysis of the problem than simply finding the condition (1). While a solid of least resistance must satisfy (1), the converse is not true. Fig. 2 shows the family of curves that satisfy it for different values of K 1 > 0 {\displaystyle K_{1}>0} . As K 1 {\displaystyle K_{1}} increases the radius, Bg = h, of the disc at x = L decreases and the curve becomes steeper. Directly before the minimum resistance problem, Newton stated that if on any elliptical or oval figure rotated about its axis, p becomes greater than unity, one with less resistance can be found. This is achieved by replacing the part of the solid that has p > 1 with the frustum of a cone whose vertex angle is a right angle, as shown in Fig. 2 for curve D ν ϕ γ B {\displaystyle D\nu \phi \gamma B} . This has less resistance than D ν ϕ Γ B {\displaystyle D\nu \phi \Gamma B} . Newton does not prove this, but adds that it might have applications in shipbuilding. Whiteside supplies a proof and contends that Newton would have used the same reasoning. In Fig. 2, since the solid generated from the curve Dng satisfies the minimum condition and has p < 1 at g, it experiences less resistance than that from any other curve with the same end point g. 
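The minimum condition (1) referred to above is the statement that ẋy/(1+ẋ²)² is constant; with p = −dy/dx it can be written y p³/(1+p²)² = K₁, which agrees with the geometric form NM·BR·GB³/GR⁴ = K₁ = GB/4 given later in this section. Integrating dx = −dy/p then yields x as a function of p. The sketch below tabulates one member of the family of curves this way; the explicit x(p) expression is a reconstruction from that condition (with an arbitrary integration constant), and the value of K₁ and the sampled slopes are illustrative choices, not values from the text.

```python
import math

# One curve from the family satisfying the minimum condition
#     y * p**3 / (1 + p**2)**2 = K1,        p = -dy/dx > 0.
# Solving for y and integrating dx = -dy/p gives (up to a constant K2)
#     y(p) = K1 * (1 + p**2)**2 / p**3
#     x(p) = K2 - K1 * (ln p + 1/p**2 + 3/(4 p**4))
# These expressions are reconstructed from the condition above; note that
# K1 = h/4 makes p = 1 exactly at the rim of the front disc, where y = h.

def curve_point(p, K1, K2=0.0):
    y = K1 * (1.0 + p * p) ** 2 / p ** 3
    x = K2 - K1 * (math.log(p) + 1.0 / p ** 2 + 3.0 / (4.0 * p ** 4))
    return x, y

K1 = 0.25   # assumed value; corresponds to a front-disc radius h = 4 * K1 = 1
for i in range(6):
    p = 0.5 + 0.1 * i     # slopes from p = 0.5 (towards the rear) to p = 1 (front rim)
    x, y = curve_point(p, K1)
    print(f"p = {p:.1f}:  x = {x:8.3f},  y = {y:6.3f}")
```

As expected, x increases and y decreases monotonically as p rises towards 1, so the profile tapers towards the front disc.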
However, for the curve DνΓ, with p > 1 at end point Γ, this is not the case for although the curve satisfies the minimum condition, the resistance experienced by φγ and γΓ together is less than that by φΓ. Newton concluded that of all solids that satisfy the minimum resistance condition, the one experiencing the least resistance, DNG in Fig. 2, is the one that has p = 1 at G. This is shown schematically in Fig. 3 where the overall resistance of the solid varies against the radius of the front surface disc, the minimum occurring when h = BG, corresponding to p = 1 at G. In the Principia, in Fig. 1 the condition for the minimum resistance solid is translated into a geometric form as follows: draw GR parallel to the tangent at N, so that p = G B B R {\displaystyle p={\frac {GB}{BR}}} , and equation (1) becomes: N M . G B 3 B R 3 ( 1 + ( G B / B R ) 2 ) 2 = N M . B R . G B 3 G R 4 = K 1 {\displaystyle {\frac {NM.GB^{3}}{BR^{3}(1+(GB/BR)^{2})^{2}}}={\frac {NM.BR.GB^{3}}{GR^{4}}}=K_{1}} At G, N M = B G {\displaystyle NM=BG} , B R = B C = B G {\displaystyle BR=BC=BG} , and G R 2 = 2 B G 2 {\displaystyle GR^{2}=2BG^{2}} , so N M . B R . G B 3 G R 4 = G B 4 = K 1 {\displaystyle {\frac {NM.BR.GB^{3}}{GR^{4}}}={\frac {GB}{4}}=K_{1}} which appears in the Principia in the form: Although this appears fairly simple, it has several subtleties that have caused much confusion. In Fig 4, assume DNSG is the curve that when rotated about AB generates the solid whose resistance is less than any other such solid with the same heights, AD = H, BG = h and length, AB = L. Fig. 5. shows the infinitesimal region of the curve about N and I in more detail. Although NI, Nj and NJ are really curved, they can be approximated by straight lines provided NH is sufficiently small. Let HM = y, AM = x, NH = u, and HI = w = dx. Let the tangent at each point on the curve, p = − d y d x = u w {\displaystyle p=-{\frac {dy}{dx}}={\frac {u}{w}}} . The reduction of the resistance of the sloping ring NI compared to the vertical ring NH rotated about AB is r = y p 3 1 + p 2 d x = y u 3 u 2 + w 2 {\displaystyle r={\frac {yp^{3}}{1+p^{2}}}dx={\frac {yu^{3}}{u^{2}+w^{2}}}} (2) Let the minimum resistance solid be replaced by an identical one, except that the arc between points I and K is shifted by a small distance to the right I J = K L = o > 0 {\displaystyle IJ=KL=o>0} , or to the left I j = K l = o < 0 {\displaystyle Ij=Kl=o<0} , as shown in more detail in Fig. 5. In either case, HI becomes H J , H j = w + o {\displaystyle HJ,Hj=w+o} . The resistance of the arcs of the curve DN and SG are unchanged. Also, the resistance of the arc IK is not changed by being shifted, since the slope remains the same along its length. The only change to the overall resistance of DNSG is due to the change to the gradient of arcs NI and KS. The 2 displacements have to be equal for the slope of the arc IK to be unaffected, and the new curve to end at G. The new resistance due to particles impinging upon NJ or Nj, rather that NI is: r + δ r = y u 3 u 2 + ( w + o ) 2 = r − 2 y w u 3 o ( u 2 + w 2 ) 2 {\displaystyle r+\delta r={\frac {yu^{3}}{u^{2}+(w+o)^{2}}}=r-{\frac {2ywu^{3}o}{(u^{2}+w^{2})^{2}}}} + w.(terms in ascending powers of o w {\displaystyle {\frac {o}{w}}} starting with the 2nd). The result is a change of resistance of: − 2 N M . H I . H N 3 o N I 4 {\displaystyle -{\frac {2NM.HI.HN^{3}o}{NI^{4}}}} + higher order terms, the resistance being reduced if o > 0 (NJ less resisted than NI). 
This is the original 1685 derivation where he obtains the above result using the series expansion in powers of o. In his 1694 revisit he differentiates (2) with respect to w. He sent details of his later approach to David Gregory, and these are included as an appendix in Motte’s translation of the Principia. Similarly, the change in resistance due to particles impinging upon SL or Sl rather that SK is: + 2 T S . O K . O S 3 o S K 4 {\displaystyle +{\frac {2TS.OK.OS^{3}o}{SK^{4}}}} + higher order terms. The overall change in the resistance of the complete solid, δ ρ = ( − 2 N M . H I . H N 3 N I 4 + 2 T S . O K . O S 3 S K 4 ) o {\displaystyle \delta \rho =(-{\frac {2NM.HI.HN^{3}}{NI^{4}}}+{\frac {2TS.OK.OS^{3}}{SK^{4}}})o} + w.(terms in ascending powers of o w {\displaystyle {\frac {o}{w}}} starting with the 2nd). Fig 6 represents the total resistance of DNJLSG, or DNjlSG as a function of o. Since the original curve DNIKSG has the least resistance, any change o of whatever sign, must result in an increase in the resistance. This is only possible if the coefficient of o in the expansion of ρ ( o ) {\displaystyle \rho (o)} is zero, so: N M . H I . H N 3 N I 4 = T S . O K . O S 3 S K 4 {\displaystyle {\frac {NM.HI.HN^{3}}{NI^{4}}}={\frac {TS.OK.OS^{3}}{SK^{4}}}} (2) If this was not the case, it would be possible to choose a value of o with a sign that produced a curve DNJLSG, or DNjlSG with less resistance than the original curve, contrary to the initial assumption. The approximation of taking straight lines for the finite arcs, NI and KS becomes exact in the limit as HN and OS approach zero. Also, NM and HM can be taken as equal, as can OT and ST. However, N and S on the original curve are arbitrary points, so for any 2 points anywhere on the curve the above equality must apply. This is only possible if in the limit of any infinitesimal arc HI, anywhere on the curve, the expression, N M . H I . H N 3 N I 4 {\displaystyle {\frac {NM.HI.HN^{3}}{NI^{4}}}} is a constant. (3) This has to be the case since, if N M . H I . H N 3 N I 4 {\displaystyle {\frac {NM.HI.HN^{3}}{NI^{4}}}} was to vary along the curve, it would be possible to find 2 infinitesimal arcs NI and KS such that (2) was false, and the coefficient of o in the expansion of δ ρ {\displaystyle \delta \rho } would be non-zero. Then a solid with less resistance could be produced by choosing a suitable value of o. This is the reason for the constant term in the minimum condition in (3). As noted above, Newton went further, and claimed that the resistance of the solid is less than that of any other with the same length and width, when the slope at G is equal to unity. Therefore, in this case, the constant in (3) is equal to one quarter of the radius of the front disc of the solid, G B 4 {\displaystyle {\frac {GB}{4}}} .
https://en.wikipedia.org/wiki/Newton's_minimal_resistance_problem
Isaac Newton 's sine-squared law of air resistance is a formula that implies the force on a flat plate immersed in a moving fluid is proportional to the square of the sine of the angle of attack . Although Newton did not analyze the force on a flat plate himself, the techniques he used for spheres, cylinders, and conical bodies were later applied to a flat plate to arrive at this formula. In 1687, Newton devoted the second volume of his Principia Mathematica to fluid mechanics. [ 1 ] The analysis assumes that the fluid particles are moving at a uniform speed prior to impacting the plate and then follow the surface of the plate after contact. Particles passing above and below the plate are assumed to be unaffected and any particle-to-particle interaction is ignored. This leads to the following formula: [ 2 ] F = ρ v 2 S sin 2 ⁡ α {\displaystyle F=\rho v^{2}S\sin ^{2}\alpha } where F is the force on the plate (oriented perpendicular to the plate), ρ {\displaystyle \rho } is the density of the fluid, v is the velocity of the fluid, S is the surface area of the plate, and α {\displaystyle \alpha } is the angle of attack. More sophisticated analysis and experimental evidence have shown that this formula is inaccurate; although Newton's analysis correctly predicted that the force was proportional to the density, the surface area of the plate, and the square of the velocity, the proportionality to the square of the sine of the angle of attack is incorrect. The force is directly proportional to the sine of the angle of attack or, for small values of α {\displaystyle \alpha } , to α {\displaystyle \alpha } itself. [ 3 ] The assumed variation with the square of the sine predicted that the lift component would be much smaller than it actually is. This was frequently cited by detractors of heavier-than-air flight to "prove" it was impossible or impractical. Ironically, the sine-squared formula has had a rebirth in modern aerodynamics; the assumptions of rectilinear flow and non-interaction between particles are applicable at hypersonic speeds, and the sine-squared formula leads to reasonable predictions. [ 4 ] [ 5 ] [ 6 ] In 1744, 17 years after Newton's death, the French mathematician Jean le Rond d'Alembert attempted to use the mathematical methods of the day to describe and quantify the forces acting on a body moving relative to a fluid. It proved impossible, and d'Alembert was forced to conclude that he could not devise a mathematical method to describe the force on a body, even though practical experience showed such a force always exists. This has become known as D'Alembert's paradox . [ 7 ]
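A quick numerical illustration of the sine-squared estimate (air density, speed and plate area are assumed round numbers); the second column uses the same prefactor with a single power of sin α, purely to contrast the two angle dependences discussed above:

```python
import math

def newton_plate_force(rho, v, S, alpha):
    """Newtonian sine-squared estimate: F = rho * v**2 * S * sin(alpha)**2."""
    return rho * v ** 2 * S * math.sin(alpha) ** 2

rho = 1.225    # kg/m^3, sea-level air density (assumed)
v = 50.0       # m/s, flow speed (assumed)
S = 2.0        # m^2, plate area (assumed)

for deg in (2, 5, 10, 20):
    a = math.radians(deg)
    f_sin2 = newton_plate_force(rho, v, S, a)
    f_sin = rho * v ** 2 * S * math.sin(a)   # same prefactor, linear in sin(alpha)
    print(f"alpha = {deg:2d} deg:  sin^2 law {f_sin2:8.1f} N   sin law {f_sin:8.1f} N")
```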
https://en.wikipedia.org/wiki/Newton's_sine-square_law_of_air_resistance
Newton-X [ 1 ] [ 2 ] is a general program for molecular dynamics simulations beyond the Born-Oppenheimer approximation . It has been primarily used for simulations of ultrafast processes ( femtosecond to picosecond time scale) in photoexcited molecules. It has also been used for simulation of band envelops of absorption and emission spectra. Newton-X uses the trajectory surface hopping method, a semi-classical approximation in which the nuclei are treated classically by Newtonian dynamics , while the electrons are treated as a quantum subsystem via a local approximation of the Time-dependent Schrödinger Equation . Nonadiabatic effects (the spread of the nuclear wave packet between several states) are recovered by a stochastic algorithm, which allows individual trajectories to change between different potential energy states during the dynamics. Newton-X is designed as a platform to perform all steps of the nonadiabatic dynamics simulations, from the initial conditions generation, through trajectories computation, to the statistical analysis of the results. It works interfaced to a number of electronic structure programs available for computational chemistry , including Gaussian , Turbomole , Gamess , and Columbus . Its modular development allows to create new interfaces and integrate new methods. Users’ new developments are encouraged and are in due course included into the main branch of the program. Nonadiabatic couplings , the central quantity in nonadiabatic simulations, can be either provided by a third-party program or computed by Newton-X. When computed by Newton-X, it is done with a numerical approximation based on overlap of electronic wavefunctions obtained in sequential time steps. A local diabatization method is also available to provide couplings in the case of weak nonadiabatic interactions. [ 3 ] Hybrid combination of methods is possible in Newton-X. Forces computed with different methods for different atomic subsets can be linearly combined to generate the final force driving the dynamics. These hybrid forces may, for instance, be combined into the popular electrostatic-embedding quantum-mechanical/molecular-mechanical method ( QM/MM ). Important options for QM/MM simulations , such as link atoms, boundaries, and thermostats are available as well. As part of the initial conditions module, Newton-X can simulate absorption, emission, and photoelectron spectra, using the Nuclear Ensemble approach , [ 4 ] which provides full spectral widths and absolute intensities. Newton-X can simulate surface-hopping dynamics with the following programs and quantum-chemical methods: The surface hopping probability depends on the values of the nonadiabatic couplings between electronic states. Newton-X can either compute nonadiabatic couplings during the dynamics or read them from an interfaced third-party program. The computation of the couplings in Newton-X is done by finite differences, following the Hammes-Schiffer - Tully approach. 
[ 5 ] In this approach, the key quantity for computation of the surface hopping probability, the inner product between the nonadiabatic couplings ( τ LM ) and the nuclear velocities ( v ) at time t , is given by τ L M ⋅ v ≈ 1 4 Δ t ( 3 S L M ( t ) − 3 S M L ( t ) − S L M ( t − Δ t ) + S M L ( t − Δ t ) ) {\displaystyle {\boldsymbol {\tau }}_{LM}\cdot \mathbf {v} \approx {\frac {1}{4\Delta t}}\left(3S_{LM}(t)-3S_{ML}(t)-S_{LM}(t-\Delta t)+S_{ML}(t-\Delta t)\right)} , where the terms S L M ( t ) ≡ ⟨ Ψ L ( t − Δ t ) ∣ Ψ M ( t ) ⟩ {\displaystyle S_{LM}(t)\equiv \left\langle \Psi _{L}(t-\Delta t)\mid \Psi _{M}(t)\right\rangle } are wavefunction overlaps between states L and M in different time steps. This method can be generally used for any electronic-structure method, provided that a configuration interaction representation of the electronic wavefunction can be worked out. In Newton-X, it is used with a number of quantum-chemical methods, including MCSCF (Multiconfigurational Self-Consistent Field), MRCI (Multi-Reference Configuration Interaction), CC2 (Coupled Cluster to Approximated Second Order), ADC(2) (Algebraic Diagrammatic Construction to Second Order), TDDFT (Time-Dependent Density Functional Theory), and TDA (Tamm-Dankov Approximation). In the case of MCSCF and MRCI, the configuration interaction coefficients are directly used for computation of couplings. For the other methods, the linear-response amplitudes are used as the coefficients of a configuration interaction wavefunction with single excitations. Newton-X simulates absorption and emission spectra using the Nuclear Ensemble approach . [ 4 ] In this approach, an ensemble of nuclear geometries is built in the initial state and the transition energies and transition moments to the other states are computed for each geometry in the ensemble. A convolution of the results provides spectral widths and absolute intensities. In the Nuclear Ensemble approach, the photoabsorption cross section for a molecule initially in the ground state and being excited with photoenergy E into N fs final electronic states is given by σ ( E ) = π e 2 ℏ 2 m c ϵ 0 n r E ∑ n N f s 1 N p ∑ l N p Δ E 0 , n ( R l ) f 0 , n ( R l ) g ( E − Δ E 0 , n ( R l ) , δ ) {\displaystyle \sigma (E)={\frac {\pi e^{2}\hbar }{2mc\epsilon _{0}n_{r}E}}\sum _{n}^{N_{fs}}{\frac {1}{N_{p}}}\sum _{l}^{N_{p}}\Delta E_{0,n}(\mathbf {R} _{l})f_{0,n}(\mathbf {R} _{l})g\left(E-\Delta E_{0,n}(\mathbf {R} _{l}),\delta \right)} , where e is the elementary charge , ħ is the reduced Planck constant , m is the electron mass , c is the speed of light , ε 0 is the vacuum permittivity , and n r is the refractive index of the medium. The first summation runs over all target states and the second summation runs over all N p points in the nuclear ensemble. Each point in the ensemble has nuclear geometry R p , transition energy Δ E 0,n , and oscillator strength f 0,n (for a transition from the ground state into state n ). g is a normalized Gaussian function with width δ given by g ( E − Δ E 0 , n , δ ) = 1 ( 2 π ( δ / 2 ) 2 ) 1 / 2 e x p ( − ( E − Δ E 0 , n ) 2 2 ( δ / 2 ) 2 ) {\displaystyle g\left(E-\Delta E_{0,n},\delta \right)={\frac {1}{\left(2\pi (\delta /2)^{2}\right)^{1/2}}}exp\left({\frac {-(E-\Delta E_{0,n})^{2}}{2(\delta /2)^{2}}}\right)} . 
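The ensemble sum just given is easy to sketch numerically. In the toy example below the physical prefactor πe²ℏ/(2mcε₀n_r E) is dropped, so intensities come out in arbitrary units, and the list of transition energies and oscillator strengths is synthetic rather than taken from any real ensemble:

```python
import math

def gaussian(x, delta):
    """Normalized Gaussian line shape of width delta, as defined above."""
    s = delta / 2.0
    return math.exp(-x * x / (2.0 * s * s)) / math.sqrt(2.0 * math.pi * s * s)

def nuclear_ensemble_spectrum(energies, ensemble, delta):
    """Broadened spectrum (arbitrary units) from (delta_E, oscillator_strength) pairs."""
    n_points = len(ensemble)
    return [sum(de * f * gaussian(e - de, delta) for de, f in ensemble) / n_points
            for e in energies]

# Synthetic ensemble: transition energies (eV) scattered around 4 eV.
ensemble = [(3.8, 0.10), (3.9, 0.12), (4.0, 0.15), (4.1, 0.12), (4.3, 0.08)]
grid = [3.0 + 0.1 * i for i in range(26)]            # 3.0 ... 5.5 eV
for e, s in zip(grid, nuclear_ensemble_spectrum(grid, ensemble, delta=0.2)):
    print(f"E = {e:.1f} eV   intensity = {s:.3f}")
```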
For emission, the differential emission rate is given by Γ ( E ) = e 2 n r 3 2 π ℏ m c 3 ϵ 0 1 N p ∑ l N p Δ E 1 , 0 ( R l ) 2 | f 1 , 0 ( R l ) | g ( E − Δ E 1 , 0 ( R l ) , δ ) {\displaystyle \Gamma (E)={\frac {e^{2}n_{r}^{3}}{2\pi \hbar mc^{3}\epsilon _{0}}}{\frac {1}{N_{p}}}\sum _{l}^{N_{p}}\Delta E_{1,0}(\mathbf {R} _{l})^{2}\left|f_{1,0}(\mathbf {R} _{l})\right|g\left(E-\Delta E_{1,0}(\mathbf {R} _{l}),\delta \right)} . In both absorption and emission, the nuclear ensemble can be sampled either from a dynamics simulation or from a Wigner distribution . Starting from version 2.0, it is possible to use the nuclear ensemble approach to simulate steady and time-resolved photoelectron spectra. The development of Newton-X started in 2005 at the Institute for Theoretical Chemistry of the University of Vienna. It was designed by Mario Barbatti in collaboration with Hans Lischka. The original code used and expanded routines written by Giovanni Granucci and Maurizio Persico from the University of Pisa. [ 2 ] A module for the computation of nonadiabatic couplings based on finite differences of either MCSCF or MRCI wavefunctions was implemented by Jiri Pittner (J. Heyrovsky Institute) [ 6 ] and later adapted to work with TDDFT . [ 7 ] A module for QM/MM dynamics was developed by Matthias Ruckenbauer. [ 8 ] Felix Plasser implemented the local diabatization method and dynamics based on CC2 and ADC(2). [ 3 ] Rachel Crespo-Otero extended the TDDFT and TDA capabilities. [ 3 ] An interface to Gamess was added by Aaron West and Theresa Windus (Iowa State University). [ 9 ] Mario Barbatti coordinates new program developments, their integration into the official version, and the Newton-X distribution. Newton-X is distributed free of charge for academic use as open-source software. The original paper [ 2 ] describing the program had been cited 190 times by December 22, 2014, according to Google Scholar . Newton-X has comprehensive documentation and a public discussion forum . A tutorial is also available online, showing how to use the main features of the program step by step. Examples of simulations are shown on a YouTube channel . The program itself is distributed with a collection of input and output files for several worked-out examples. A number of workshops on nonadiabatic simulations using Newton-X have been organized in Vienna (2008), Rio de Janeiro (2009), Sao Carlos (2011), Chiang Mai (2011, 2015), and Jeddah (2014). [ 10 ] A main concept guiding the Newton-X development is that the program should be simple to use while still providing as many options as possible to customize jobs. This is achieved by a series of input tools that guide the user through the program options, providing context-dependent variable values whenever possible. Newton-X is written as a combination of independent programs. The coordinated execution of these programs is done by drivers written in Perl , while the programs dealing with integration of the dynamics and other mathematical aspects are written in Fortran 90 and C . Memory is dynamically allocated and there are no formal limits for most variables, such as the number of atoms or states. Newton-X uses a three-level parallelization : the first level is a trivial parallelization given by the independent-trajectories approach used by the program. Complete sets of input files are redundantly written to allow each trajectory to be executed independently. They can be easily merged for final analysis in a later step.
In a second level, Newton-X takes advantage of the parallelization of the third-party programs with which it is interfaced. Thus, a Newton-X simulation using the interface with Gaussian program can be first distributed over a cluster in terms of independent trajectories and each trajectory runs parallelized version of Gaussian. In the third level, the coupling computations in Newton-X are parallelized. Starting with version (1.3, 2013), Newton-X uses meta-codes to control the dynamics simulation behavior. Based on a series of initial instructions provided by the user, new codes are automatically written and executed on-the-fly. These codes allow, for instance, checking specific conditions to terminate the simulations. To keep a modular architecture for easy inclusion of new algorithms, Newton-X is organized as a series of independent programs connected by general program drivers. For this reason, a large amount of input/output is required during the program's execution, reducing its efficiency. When dynamics is based on ab initio methods, this is normally not a problem, as the time bottleneck is in the electronic structure calculation. Low efficiency due to input/output can, however, be relevant with semiempirical methods. Other problems with the current implementation are the lack of parallelization of the code, especially of the couplings computation, and the restriction of the program to Linux systems.
https://en.wikipedia.org/wiki/Newton-X
The newton-second (also newton second ; symbol: N⋅s or N s ) [ 1 ] is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second ( kg⋅m/s ). One newton-second corresponds to a one- newton force applied for one second . It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval. Momentum is given by the formula: p = m v {\displaystyle p=mv} , where p is the momentum, m is the mass and v is the velocity. This classical mechanics –related article is a stub . You can help Wikipedia by expanding it . This standards - or measurement -related article is a stub . You can help Wikipedia by expanding it .
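For instance, a one-newton force acting for one second changes the momentum of a body by one newton-second; a short illustration with assumed numbers:

```python
def momentum(mass_kg, velocity_m_per_s):
    """Momentum p = m * v, in kg*m/s (equivalently newton-seconds)."""
    return mass_kg * velocity_m_per_s

# A 1 N force applied for 1 s delivers an impulse of 1 N*s,
# bringing a 1 kg mass from rest to 1 m/s.
print(momentum(1.0, 1.0))        # 1.0 N*s
print(momentum(0.145, 40.0))     # e.g. a 145 g ball at 40 m/s: 5.8 N*s (illustrative)
```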
https://en.wikipedia.org/wiki/Newton-second
The Newton Lacy Pierce Prize in Astronomy is awarded annually by the American Astronomical Society to a young (less than age 36) astronomer for outstanding achievement in observational astronomical research. The prize is named after Newton Lacy Pierce , an American astronomer. [ 1 ]
https://en.wikipedia.org/wiki/Newton_Lacy_Pierce_Prize_in_Astronomy
In mathematics , the Newton polygon is a tool for understanding the behaviour of polynomials over local fields , or more generally, over ultrametric fields. In the original case, the ultrametric field of interest was essentially the field of formal Laurent series in the indeterminate X , i.e. the field of fractions of the formal power series ring K [ [ X ] ] {\displaystyle K[[X]]} , over K {\displaystyle K} , where K {\displaystyle K} was the real number or complex number field. This is still of considerable utility with respect to Puiseux expansions . The Newton polygon is an effective device for understanding the leading terms a X r {\displaystyle aX^{r}} of the power series expansion solutions to equations P ( F ( X ) ) = 0 {\displaystyle P(F(X))=0} where P {\displaystyle P} is a polynomial with coefficients in K [ X ] {\displaystyle K[X]} , the polynomial ring ; that is, implicitly defined algebraic functions . The exponents r {\displaystyle r} here are certain rational numbers , depending on the branch chosen; and the solutions themselves are power series in K [ [ Y ] ] {\displaystyle K[[Y]]} with Y = X 1 d {\displaystyle Y=X^{\frac {1}{d}}} for a denominator d {\displaystyle d} corresponding to the branch. The Newton polygon gives an effective, algorithmic approach to calculating d {\displaystyle d} . After the introduction of the p-adic numbers , it was shown that the Newton polygon is just as useful in questions of ramification for local fields, and hence in algebraic number theory . Newton polygons have also been useful in the study of elliptic curves . A priori, given a polynomial over a field, the behaviour of the roots (assuming it has roots) will be unknown. Newton polygons provide one technique for the study of the behaviour of the roots. Let K {\displaystyle K} be a field endowed with a non-archimedean valuation v K : K → R ∪ { ∞ } {\displaystyle v_{K}:K\to \mathbb {R} \cup \{\infty \}} , and let with a 0 a n ≠ 0 {\displaystyle a_{0}a_{n}\neq 0} . Then the Newton polygon of f {\displaystyle f} is defined to be the lower boundary of the convex hull of the set of points P i = ( i , v K ( a i ) ) , {\displaystyle P_{i}=\left(i,v_{K}(a_{i})\right),} ignoring the points with a i = 0 {\displaystyle a_{i}=0} . Restated geometrically, plot all of these points P i on the xy -plane. Let's assume that the points indices increase from left to right ( P 0 is the leftmost point, P n is the rightmost point). Then, starting at P 0 , draw a ray straight down parallel with the y -axis, and rotate this ray counter-clockwise until it hits the point P k 1 (not necessarily P 1 ). Break the ray here. Now draw a second ray from P k 1 straight down parallel with the y -axis, and rotate this ray counter-clockwise until it hits the point P k 2 . Continue until the process reaches the point P n ; the resulting polygon (containing the points P 0 , P k 1 , P k 2 , ..., P k m , P n ) is the Newton polygon. Another, perhaps more intuitive way to view this process is this : consider a rubber band surrounding all the points P 0 , ..., P n . Stretch the band upwards, such that the band is stuck on its lower side by some of the points (the points act like nails, partially hammered into the xy plane). The vertices of the Newton polygon are exactly those points. For a neat diagram of this see Ch6 §3 of "Local Fields" by JWS Cassels, LMS Student Texts 3, CUP 1986. It is on p99 of the 1986 paperback edition. 
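The construction just described is easy to carry out by machine: compute the valuations of the coefficients, take the lower convex hull of the points (i, v(a_i)), and read off its vertices and slopes. The sketch below does this for a made-up polynomial over Q₂; the choice of polynomial is purely illustrative.

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(points):
    """Vertices of the lower convex hull of (i, v(a_i)); omit points with a_i = 0."""
    hull = []
    for q in sorted(points):
        # Pop the last vertex while it does not lie strictly below the segment
        # joining its predecessor to the new point (i.e. it is not a true vertex).
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (q[1] - oy) - (ay - oy) * (q[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(q)
    return hull

# Example: f(x) = 8 + 2x + x^2 + x^3 over Q_2.
p = 2
coeffs = [8, 2, 1, 1]                                  # a_0, a_1, a_2, a_3
vertices = newton_polygon([(i, vp(a, p)) for i, a in enumerate(coeffs)])
slopes = [Fraction(y2 - y1, x2 - x1)
          for (x1, y1), (x2, y2) in zip(vertices, vertices[1:])]
print("vertices:", vertices)    # [(0, 3), (1, 1), (2, 0), (3, 0)]
print("slopes:  ", slopes)      # values -2, -1, 0: roots of valuation 2, 1 and 0 (main theorem below)
```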
With the notations in the previous section, the main result concerning the Newton polygon is the following theorem, [ 1 ] which states that the valuation of the roots of f {\displaystyle f} are entirely determined by its Newton polygon: Let μ 1 , μ 2 , … , μ r {\displaystyle \mu _{1},\mu _{2},\ldots ,\mu _{r}} be the slopes of the line segments of the Newton polygon of f ( x ) {\displaystyle f(x)} (as defined above) arranged in increasing order, and let λ 1 , λ 2 , … , λ r {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{r}} be the corresponding lengths of the line segments projected onto the x-axis (i.e. if we have a line segment stretching between the points P i {\displaystyle P_{i}} and P j {\displaystyle P_{j}} then the length is j − i {\displaystyle j-i} ). With the notation of the previous sections, we denote, in what follows, by L {\displaystyle L} the splitting field of f {\displaystyle f} over K {\displaystyle K} , and by v L {\displaystyle v_{L}} an extension of v K {\displaystyle v_{K}} to L {\displaystyle L} . Newton polygon theorem is often used to show the irreducibility of polynomials, as in the next corollary for example: Indeed, by the main theorem, if α {\displaystyle \alpha } is a root of f {\displaystyle f} , v L ( α ) = − a / n . {\displaystyle v_{L}(\alpha )=-a/n.} If f {\displaystyle f} were not irreducible over K {\displaystyle K} , then the degree d {\displaystyle d} of α {\displaystyle \alpha } would be < n {\displaystyle <n} , and there would hold v L ( α ) ∈ 1 d Z {\displaystyle v_{L}(\alpha )\in {1 \over d}\mathbb {Z} } . But this is impossible since v L ( α ) = − a / n {\displaystyle v_{L}(\alpha )=-a/n} with a {\displaystyle a} coprime to n {\displaystyle n} . Another simple corollary is the following: Proof: By the main theorem, f {\displaystyle f} must have a single root α {\displaystyle \alpha } whose valuation is v L ( α ) = − μ i . {\displaystyle v_{L}(\alpha )=-\mu _{i}.} In particular, α {\displaystyle \alpha } is separable over K {\displaystyle K} . If α {\displaystyle \alpha } does not belong to K {\displaystyle K} , α {\displaystyle \alpha } has a distinct Galois conjugate α ′ {\displaystyle \alpha '} over K {\displaystyle K} , with v L ( α ′ ) = v L ( α ) {\displaystyle v_{L}(\alpha ')=v_{L}(\alpha )} , [ 2 ] and α ′ {\displaystyle \alpha '} is a root of f {\displaystyle f} , a contradiction. More generally, the following factorization theorem holds: Proof: For every i {\displaystyle i} , denote by f i {\displaystyle f_{i}} the product of the monomials ( X − α ) {\displaystyle (X-\alpha )} such that α {\displaystyle \alpha } is a root of f {\displaystyle f} and v L ( α ) = − μ i {\displaystyle v_{L}(\alpha )=-\mu _{i}} . We also denote f = A P 1 k 1 P 2 k 2 ⋯ P s k s {\displaystyle f=AP_{1}^{k_{1}}P_{2}^{k_{2}}\cdots P_{s}^{k_{s}}} the factorization of f {\displaystyle f} in K [ X ] {\displaystyle K[X]} into prime monic factors ( A ∈ K ) . {\displaystyle (A\in K).} Let α {\displaystyle \alpha } be a root of f i {\displaystyle f_{i}} . We can assume that P 1 {\displaystyle P_{1}} is the minimal polynomial of α {\displaystyle \alpha } over K {\displaystyle K} . If α ′ {\displaystyle \alpha '} is a root of P 1 {\displaystyle P_{1}} , there exists a K-automorphism σ {\displaystyle \sigma } of L {\displaystyle L} that sends α {\displaystyle \alpha } to α ′ {\displaystyle \alpha '} , and we have v L ( σ α ) = v L ( α ) {\displaystyle v_{L}(\sigma \alpha )=v_{L}(\alpha )} since K {\displaystyle K} is Henselian. 
Therefore α ′ {\displaystyle \alpha '} is also a root of f i {\displaystyle f_{i}} . Moreover, every root of P 1 {\displaystyle P_{1}} of multiplicity ν {\displaystyle \nu } is clearly a root of f i {\displaystyle f_{i}} of multiplicity k 1 ν {\displaystyle k_{1}\nu } , since repeated roots share obviously the same valuation. This shows that P 1 k 1 {\displaystyle P_{1}^{k_{1}}} divides f i . {\displaystyle f_{i}.} Let g i = f i / P 1 k 1 {\displaystyle g_{i}=f_{i}/P_{1}^{k_{1}}} . Choose a root β {\displaystyle \beta } of g i {\displaystyle g_{i}} . Notice that the roots of g i {\displaystyle g_{i}} are distinct from the roots of P 1 {\displaystyle P_{1}} . Repeat the previous argument with the minimal polynomial of β {\displaystyle \beta } over K {\displaystyle K} , assumed w.l.g. to be P 2 {\displaystyle P_{2}} , to show that P 2 k 2 {\displaystyle P_{2}^{k_{2}}} divides g i {\displaystyle g_{i}} . Continuing this process until all the roots of f i {\displaystyle f_{i}} are exhausted, one eventually arrives to f i = P 1 k 1 ⋯ P m k m {\displaystyle f_{i}=P_{1}^{k_{1}}\cdots P_{m}^{k_{m}}} , with m ≤ s {\displaystyle m\leq s} . This shows that f i ∈ K [ X ] {\displaystyle f_{i}\in K[X]} , f i {\displaystyle f_{i}} monic. But the f i {\displaystyle f_{i}} are coprime since their roots have distinct valuations. Hence clearly f = A f 1 ⋅ f 2 ⋯ f r {\displaystyle f=Af_{1}\cdot f_{2}\cdots f_{r}} , showing the main contention. The fact that λ i = deg ⁡ ( f i ) {\displaystyle \lambda _{i}=\deg(f_{i})} follows from the main theorem, and so does the fact that μ i = v K ( f i ( 0 ) ) / λ i {\displaystyle \mu _{i}=v_{K}(f_{i}(0))/\lambda _{i}} , by remarking that the Newton polygon of f i {\displaystyle f_{i}} can have only one segment joining ( 0 , v K ( f i ( 0 ) ) {\displaystyle (0,v_{K}(f_{i}(0))} to ( λ i , 0 = v K ( 1 ) ) {\displaystyle (\lambda _{i},0=v_{K}(1))} . The condition for the irreducibility of f i {\displaystyle f_{i}} follows from the corollary above. (q.e.d.) The following is an immediate corollary of the factorization above, and constitutes a test for the reducibility of polynomials over Henselian fields: Other applications of the Newton polygon comes from the fact that a Newton Polygon is sometimes a special case of a Newton polytope , and can be used to construct asymptotic solutions of two-variable polynomial equations like 3 x 2 y 3 − x y 2 + 2 x 2 y 2 − x 3 y = 0. {\displaystyle 3x^{2}y^{3}-xy^{2}+2x^{2}y^{2}-x^{3}y=0.} In the context of a valuation, we are given certain information in the form of the valuations of elementary symmetric functions of the roots of a polynomial, and require information on the valuations of the actual roots, in an algebraic closure . This has aspects both of ramification theory and singularity theory . The valid inferences possible are to the valuations of power sums , by means of Newton's identities . Newton polygons are named after Isaac Newton , who first described them and some of their uses in correspondence from the year 1676 addressed to Henry Oldenburg . [ 4 ]
https://en.wikipedia.org/wiki/Newton_polygon
In the mathematical field of numerical analysis , a Newton polynomial , named after its inventor Isaac Newton , [ 1 ] is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method. Given a set of k + 1 data points where no two x j are the same, the Newton interpolation polynomial is a linear combination of Newton basis polynomials with the Newton basis polynomials defined as for j > 0 and n 0 ( x ) ≡ 1 {\displaystyle n_{0}(x)\equiv 1} . The coefficients are defined as where [ y 0 , … , y j ] {\displaystyle [y_{0},\ldots ,y_{j}]} are the divided differences defined as [ y k ] := y k , k ∈ { 0 , … , n } [ y k , … , y k + j ] := [ y k + 1 , … , y k + j ] − [ y k , … , y k + j − 1 ] x k + j − x k , k ∈ { 0 , … , n − j } , j ∈ { 1 , … , n } . {\displaystyle {\begin{aligned}{\mathopen {[}}y_{k}]&:=y_{k},&&k\in \{0,\ldots ,n\}\\{\mathopen {[}}y_{k},\ldots ,y_{k+j}]&:={\frac {[y_{k+1},\ldots ,y_{k+j}]-[y_{k},\ldots ,y_{k+j-1}]}{x_{k+j}-x_{k}}},&&k\in \{0,\ldots ,n-j\},\ j\in \{1,\ldots ,n\}.\end{aligned}}} Thus the Newton polynomial can be written as The Newton polynomial can be expressed in a simplified form when x 0 , x 1 , … , x k {\displaystyle x_{0},x_{1},\dots ,x_{k}} are arranged consecutively with equal spacing. If x 0 , x 1 , … , x k {\displaystyle x_{0},x_{1},\dots ,x_{k}} are consecutively arranged and equally spaced with x i = x 0 + i h {\displaystyle {x}_{i}={x}_{0}+ih} for i = 0, 1, ..., k and some variable x is expressed as x = x 0 + s h {\displaystyle {x}={x}_{0}+sh} , then the difference x − x i {\displaystyle x-x_{i}} can be written as ( s − i ) h {\displaystyle (s-i)h} . So the Newton polynomial becomes This is called the Newton forward divided difference formula . [ citation needed ] If the nodes are reordered as x k , x k − 1 , … , x 0 {\displaystyle {x}_{k},{x}_{k-1},\dots ,{x}_{0}} , the Newton polynomial becomes If x k , x k − 1 , … , x 0 {\displaystyle {x}_{k},\;{x}_{k-1},\;\dots ,\;{x}_{0}} are equally spaced with x i = x k − ( k − i ) h {\displaystyle {x}_{i}={x}_{k}-(k-i)h} for i = 0, 1, ..., k and x = x k + s h {\displaystyle {x}={x}_{k}+sh} , then, This is called the Newton backward divided difference formula . [ citation needed ] Newton's formula is of interest because it is the straightforward and natural differences-version of Taylor's polynomial. Taylor's polynomial tells where a function will go, based on its y value, and its derivatives (its rate of change, and the rate of change of its rate of change, etc.) at one particular x value. Newton's formula is Taylor's polynomial based on finite differences instead of instantaneous rates of change. For a polynomial p n {\displaystyle p_{n}} of degree less than or equal to n, that interpolates f {\displaystyle f} at the nodes x i {\displaystyle x_{i}} where i = 0 , 1 , 2 , 3 , ⋯ , n {\displaystyle i=0,1,2,3,\cdots ,n} . Let p n + 1 {\displaystyle p_{n+1}} be the polynomial of degree less than or equal to n+1 that interpolates f {\displaystyle f} at the nodes x i {\displaystyle x_{i}} where i = 0 , 1 , 2 , 3 , ⋯ , n , n + 1 {\displaystyle i=0,1,2,3,\cdots ,n,n+1} . 
Then p n + 1 {\displaystyle p_{n+1}} is given by: p n + 1 ( x ) = p n ( x ) + a n + 1 w n ( x ) {\displaystyle p_{n+1}(x)=p_{n}(x)+a_{n+1}w_{n}(x)} where w n ( x ) := ∏ i = 0 n ( x − x i ) {\textstyle w_{n}(x):=\prod _{i=0}^{n}(x-x_{i})} and a n + 1 := f ( x n + 1 ) − p n ( x n + 1 ) w n ( x n + 1 ) {\textstyle a_{n+1}:={f(x_{n+1})-p_{n}(x_{n+1}) \over w_{n}(x_{n+1})}} . Proof: This can be shown for the case where i = 0 , 1 , 2 , 3 , ⋯ , n {\displaystyle i=0,1,2,3,\cdots ,n} : p n + 1 ( x i ) = p n ( x i ) + a n + 1 ∏ j = 0 n ( x i − x j ) = p n ( x i ) {\displaystyle p_{n+1}(x_{i})=p_{n}(x_{i})+a_{n+1}\prod _{j=0}^{n}(x_{i}-x_{j})=p_{n}(x_{i})} and when i = n + 1 {\displaystyle i=n+1} : p n + 1 ( x n + 1 ) = p n ( x n + 1 ) + f ( x n + 1 ) − p n ( x n + 1 ) w n ( x n + 1 ) w n ( x n + 1 ) = f ( x n + 1 ) {\displaystyle p_{n+1}(x_{n+1})=p_{n}(x_{n+1})+{f(x_{n+1})-p_{n}(x_{n+1}) \over w_{n}(x_{n+1})}w_{n}(x_{n+1})=f(x_{n+1})} By the uniqueness of interpolated polynomials of degree less than n + 1 {\displaystyle n+1} , p n + 1 ( x ) = p n ( x ) + a n + 1 w n ( x ) {\textstyle p_{n+1}(x)=p_{n}(x)+a_{n+1}w_{n}(x)} is the required polynomial interpolation. The function can thus be expressed as: p n ( x ) = a 0 + a 1 ( x − x 0 ) + a 2 ( x − x 0 ) ( x − x 1 ) + ⋯ + a n ( x − x 0 ) ⋯ ( x − x n − 1 ) {\textstyle p_{n}(x)=a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})(x-x_{1})+\cdots +a_{n}(x-x_{0})\cdots (x-x_{n-1})} where the factors a i {\displaystyle a_{i}} are divided differences . Thus, Newton polynomials are used to provide polynomial interpolation formula of n points. [ 2 ] Taking y i = f ( x i ) {\displaystyle y_{i}=f(x_{i})} for some unknown function in Newton divided difference formulas, if the representation of x in the previous sections was instead taken to be x = x j + s h {\displaystyle x=x_{j}+sh} , in terms of forward differences , the Newton forward interpolation formula is expressed as: f ( x ) ≈ N ( x ) = N ( x j + s h ) = ∑ i = 0 k ( s i ) Δ ( i ) f ( x j ) {\displaystyle f(x)\approx N(x)=N(x_{j}+sh)=\sum _{i=0}^{k}{s \choose i}\Delta ^{(i)}f(x_{j})} whereas for the same in terms of backward differences , the Newton backward interpolation formula is expressed as: f ( x ) ≈ N ( x ) = N ( x j + s h ) = ∑ i = 0 k ( − 1 ) i ( − s i ) ∇ ( i ) f ( x j ) . {\displaystyle f(x)\approx N(x)=N(x_{j}+sh)=\sum _{i=0}^{k}{(-1)}^{i}{-s \choose i}\nabla ^{(i)}f(x_{j}).} This follows since relationship between divided differences and forward differences is given as: [ 3 ] [ y j , y j + 1 , … , y j + n ] = 1 n ! h n Δ ( n ) y j , {\displaystyle [y_{j},y_{j+1},\ldots ,y_{j+n}]={\frac {1}{n!h^{n}}}\Delta ^{(n)}y_{j},} whereas for backward differences, it is given as: [ citation needed ] [ y j , y j − 1 , … , y j − n ] = 1 n ! h n ∇ ( n ) y j . {\displaystyle [{y}_{j},y_{j-1},\ldots ,{y}_{j-n}]={\frac {1}{n!h^{n}}}\nabla ^{(n)}y_{j}.} As with other difference formulas, the degree of a Newton interpolating polynomial can be increased by adding more terms and points without discarding existing ones. Newton's form has the simplicity that the new points are always added at one end: Newton's forward formula can add new points to the right, and Newton's backward formula can add new points to the left. The accuracy of polynomial interpolation depends on how close the interpolated point is to the middle of the x values of the set of points used. Obviously, as new points are added at one end, that middle becomes farther and farther from the first data point. 
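The recursion p_{n+1}(x) = p_n(x) + a_{n+1} w_n(x) above is exactly what makes it cheap to append a node at one end without recomputing anything. A minimal sketch, reusing the hypothetical divided_differences/newton_eval helpers from the previous example:

```python
def add_node(xs, a, x_new, y_new):
    """Extend a Newton-form interpolant by one node, keeping the old coefficients.

    Uses a_{n+1} = (f(x_{n+1}) - p_n(x_{n+1})) / w_n(x_{n+1}),  w_n(x) = prod_i (x - x_i).
    """
    w = 1.0
    for xi in xs:
        w *= (x_new - xi)
    a_new = (y_new - newton_eval(xs, a, x_new)) / w
    return xs + [x_new], a + [a_new]

# Continuing the sketch above: append the node x = 5 for f(x) = x**3
xs2, a2 = add_node(xs, a, 5.0, 5.0**3)
print(a2[-1])      # ~0.0, since the cubic interpolant already reproduces x**3 exactly
```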
Therefore, if it isn't known how many points will be needed for the desired accuracy, the middle of the x-values might be far from where the interpolation is done. Gauss, Stirling, and Bessel all developed formulae to remedy that problem. [ 4 ] Gauss's formula alternately adds new points at the left and right ends, thereby keeping the set of points centered near the same place (near the evaluated point). When so doing, it uses terms from Newton's formula, with data points and x values renamed in keeping with one's choice of what data point is designated as the x 0 data point. Stirling's formula remains centered about a particular data point, for use when the evaluated point is nearer to a data point than to a middle of two data points. Bessel's formula remains centered about a particular middle between two data points, for use when the evaluated point is nearer to a middle than to a data point. Bessel and Stirling achieve that by sometimes using the average of two differences, and sometimes using the average of two products of binomials in x , where Newton's or Gauss's would use just one difference or product. Stirling's uses an average difference in odd-degree terms (whose difference uses an even number of data points); Bessel's uses an average difference in even-degree terms (whose difference uses an odd number of data points). For any given finite set of data points, there is only one polynomial of least possible degree that passes through all of them. Thus, it is appropriate to speak of the "Newton form", or Lagrange form , etc., of the interpolation polynomial. However, different methods of computing this polynomial can have differing computational efficiency. There are several similar methods, such as those of Gauss, Bessel and Stirling. They can be derived from Newton's by renaming the x -values of the data points, but in practice they are important. The choice between Bessel and Stirling depends on whether the interpolated point is closer to a data point, or closer to a middle between two data points. A polynomial interpolation's error approaches zero, as the interpolation point approaches a data-point. Therefore, Stirling's formula brings its accuracy improvement where it is least needed and Bessel brings its accuracy improvement where it is most needed. So, Bessel's formula could be said to be the most consistently accurate difference formula, and, in general, the most consistently accurate of the familiar polynomial interpolation formulas. Lagrange is sometimes said to require less work, and is sometimes recommended for problems in which it is known, in advance, from previous experience, how many terms are needed for sufficient accuracy. The divided difference methods have the advantage that more data points can be added, for improved accuracy. The terms based on the previous data points can continue to be used. With the ordinary Lagrange formula, to do the problem with more data points would require re-doing the whole problem. There is a "barycentric" version of Lagrange that avoids the need to re-do the entire calculation when adding a new data point. But it requires that the values of each term be recorded. But the ability, of Gauss, Bessel and Stirling, to keep the data points centered close to the interpolated point gives them an advantage over Lagrange, when it isn't known, in advance, how many data points will be needed. Additionally, suppose that one wants to find out if, for some particular type of problem, linear interpolation is sufficiently accurate. 
That can be determined by evaluating the quadratic term of a divided difference formula. If the quadratic term is negligible—meaning that the linear term is sufficiently accurate without adding the quadratic term—then linear interpolation is sufficiently accurate. If the problem is sufficiently important, or if the quadratic term is nearly big enough to matter, then one might want to determine whether the sum of the quadratic and cubic terms is large enough to matter in the problem. Of course, only a divided-difference method can be used for such a determination. For that purpose, the divided-difference formula and/or its x 0 point should be chosen so that the formula will use, for its linear term, the two data points between which the linear interpolation of interest would be done. The divided difference formulas are more versatile, useful in more kinds of problems. The Lagrange formula is at its best when all the interpolation will be done at one x value, with only the data points' y values varying from one problem to another, and when it is known, from past experience, how many terms are needed for sufficient accuracy. With the Newton form of the interpolating polynomial a compact and effective algorithm exists for combining the terms to find the coefficients of the polynomial. [ 5 ] When, with Stirling's or Bessel's, the last term used includes the average of two differences, then one more point is being used than Newton's or other polynomial interpolations would use for the same polynomial degree. So, in that instance, Stirling's or Bessel's is not putting an N −1 degree polynomial through N points, but is, instead, trading equivalence with Newton's for better centering and accuracy, giving those methods sometimes potentially greater accuracy, for a given polynomial degree, than other polynomial interpolations. For the special case of x i = i , there is a closely related set of polynomials, also called the Newton polynomials, that are simply the binomial coefficients for general argument. That is, one also has the Newton polynomials p n ( z ) {\displaystyle p_{n}(z)} given by In this form, the Newton polynomials generate the Newton series . These are in turn a special case of the general difference polynomials which allow the representation of analytic functions through generalized difference equations. Solving an interpolation problem leads to a problem in linear algebra where we have to solve a system of linear equations. Using a standard monomial basis for our interpolation polynomial we get the very complicated Vandermonde matrix . By choosing another basis, the Newton basis, we get a system of linear equations with a much simpler lower triangular matrix which can be solved faster. For k + 1 data points we construct the Newton basis as Using these polynomials as a basis for Π k {\displaystyle \Pi _{k}} we have to solve to solve the polynomial interpolation problem. This system of equations can be solved iteratively by solving While the interpolation formula can be found by solving a linear system of equations, there is a loss of intuition in what the formula is showing and why Newton's interpolation formula works is not readily apparent. To begin, we will need to establish two facts first: Fact 1. Reversing the terms of a divided difference leaves it unchanged: [ y 0 , … , y n ] = [ y n , … , y 0 ] . 
{\displaystyle [y_{0},\ldots ,y_{n}]=[y_{n},\ldots ,y_{0}].} The proof of this is an easy induction: for n = 1 {\displaystyle n=1} we compute [ y 0 , y 1 ] = [ y 1 ] − [ y 0 ] x 1 − x 0 = [ y 0 ] − [ y 1 ] x 0 − x 1 = [ y 1 , y 0 ] . {\displaystyle [y_{0},y_{1}]={\frac {[y_{1}]-[y_{0}]}{x_{1}-x_{0}}}={\frac {[y_{0}]-[y_{1}]}{x_{0}-x_{1}}}=[y_{1},y_{0}].} Induction step: Suppose the result holds for any divided difference involving at most n + 1 {\displaystyle n+1} terms. Then using the induction hypothesis in the following 2nd equality we see that for a divided difference involving n + 2 {\displaystyle n+2} terms we have [ y 0 , … , y n + 1 ] = [ y 1 , … , y n + 1 ] − [ y 0 , … , y n ] x n + 1 − x 0 = [ y n , … , y 0 ] − [ y n + 1 , … , y 1 ] x 0 − x n + 1 = [ y n + 1 , … , y 0 ] . {\displaystyle [y_{0},\ldots ,y_{n+1}]={\frac {[y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]}{x_{n+1}-x_{0}}}={\frac {[y_{n},\ldots ,y_{0}]-[y_{n+1},\ldots ,y_{1}]}{x_{0}-x_{n+1}}}=[y_{n+1},\ldots ,y_{0}].} We formulate next Fact 2 which for purposes of induction and clarity we also call Statement n {\displaystyle n} ( Stm n {\displaystyle {\text{Stm}}_{n}} ) : Fact 2. ( Stm n {\displaystyle {\text{Stm}}_{n}} ) : If ( x 0 , y 0 ) , … , ( x n − 1 , y n − 1 ) {\displaystyle (x_{0},y_{0}),\ldots ,(x_{n-1},y_{n-1})} are any n {\displaystyle n} points with distinct x {\displaystyle x} -coordinates and P = P ( x ) {\displaystyle P=P(x)} is the unique polynomial of degree (at most) n − 1 {\displaystyle n-1} whose graph passes through these n {\displaystyle n} points then there holds the relation [ y 0 , … , y n ] ( x n − x 0 ) ⋅ … ⋅ ( x n − x n − 1 ) = y n − P ( x n ) {\displaystyle [y_{0},\ldots ,y_{n}](x_{n}-x_{0})\cdot \ldots \cdot (x_{n}-x_{n-1})=y_{n}-P(x_{n})} Proof. (It will be helpful for fluent reading of the proof to have the precise statement and its subtlety in mind: P {\displaystyle P} is defined by passing through ( x 0 , y 0 ) , . . . , ( x n − 1 , y n − 1 ) {\displaystyle (x_{0},y_{0}),...,(x_{n-1},y_{n-1})} but the formula also speaks at both sides of an additional arbitrary point ( x n , y n ) {\displaystyle (x_{n},y_{n})} with x {\displaystyle x} -coordinate distinct from the other x i {\displaystyle x_{i}} .) We again prove these statements by induction. To show Stm 1 , {\displaystyle {\text{Stm}}_{1},} let ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} be any one point and let P ( x ) {\displaystyle P(x)} be the unique polynomial of degree 0 passing through ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} . Then evidently P ( x ) = y 0 {\displaystyle P(x)=y_{0}} and we can write [ y 0 , y 1 ] ( x 1 − x 0 ) = y 1 − y 0 x 1 − x 0 ( x 1 − x 0 ) = y 1 − y 0 = y 1 − P ( x 1 ) {\displaystyle [y_{0},y_{1}](x_{1}-x_{0})={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}(x_{1}-x_{0})=y_{1}-y_{0}=y_{1}-P(x_{1})} as wanted. Proof of Stm n + 1 , {\displaystyle {\text{Stm}}_{n+1},} assuming Stm n {\displaystyle {\text{Stm}}_{n}} already established: Let P ( x ) {\displaystyle P(x)} be the polynomial of degree (at most) n {\displaystyle n} passing through ( x 0 , y 0 ) , … , ( x n , y n ) . 
{\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n}).} With Q ( x ) {\displaystyle Q(x)} being the unique polynomial of degree (at most) n − 1 {\displaystyle n-1} passing through the points ( x 1 , y 1 ) , … , ( x n , y n ) {\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})} , we can write the following chain of equalities, where we use in the penultimate equality that Stm n {\displaystyle _{n}} applies to Q {\displaystyle Q} : [ y 0 , … , y n + 1 ] ( x n + 1 − x 0 ) ⋅ … ⋅ ( x n + 1 − x n ) = [ y 1 , … , y n + 1 ] − [ y 0 , … , y n ] x n + 1 − x 0 ( x n + 1 − x 0 ) ⋅ … ⋅ ( x n + 1 − x n ) = ( [ y 1 , … , y n + 1 ] − [ y 0 , … , y n ] ) ( x n + 1 − x 1 ) ⋅ … ⋅ ( x n + 1 − x n ) = [ y 1 , … , y n + 1 ] ( x n + 1 − x 1 ) ⋅ … ⋅ ( x n + 1 − x n ) − [ y 0 , … , y n ] ( x n + 1 − x 1 ) ⋅ … ⋅ ( x n + 1 − x n ) = ( y n + 1 − Q ( x n + 1 ) ) − [ y 0 , … , y n ] ( x n + 1 − x 1 ) ⋅ … ⋅ ( x n + 1 − x n ) = y n + 1 − ( Q ( x n + 1 ) + [ y 0 , … , y n ] ( x n + 1 − x 1 ) ⋅ … ⋅ ( x n + 1 − x n ) ) . {\displaystyle {\begin{aligned}&[y_{0},\ldots ,y_{n+1}](x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&={\frac {[y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]}{x_{n+1}-x_{0}}}(x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=\left([y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]\right)(x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=[y_{1},\ldots ,y_{n+1}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=(y_{n+1}-Q(x_{n+1}))-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=y_{n+1}-(Q(x_{n+1})+[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})).\end{aligned}}} The induction hypothesis for Q {\displaystyle Q} also applies to the second equality in the following computation, where ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} is added to the points defining Q {\displaystyle Q} : Q ( x 0 ) + [ y 0 , … , y n ] ( x 0 − x 1 ) ⋅ … ⋅ ( x 0 − x n ) = Q ( x 0 ) + [ y n , … , y 0 ] ( x 0 − x n ) ⋅ … ⋅ ( x 0 − x 1 ) = Q ( x 0 ) + y 0 − Q ( x 0 ) = y 0 = P ( x 0 ) . {\displaystyle {\begin{aligned}&Q(x_{0})+[y_{0},\ldots ,y_{n}](x_{0}-x_{1})\cdot \ldots \cdot (x_{0}-x_{n})\\&=Q(x_{0})+[y_{n},\ldots ,y_{0}](x_{0}-x_{n})\cdot \ldots \cdot (x_{0}-x_{1})\\&=Q(x_{0})+y_{0}-Q(x_{0})\\&=y_{0}\\&=P(x_{0}).\\\end{aligned}}} Now look at Q ( x ) + [ y 0 , … , y n ] ( x − x 1 ) ⋅ … ⋅ ( x − x n ) . {\displaystyle Q(x)+[y_{0},\ldots ,y_{n}](x-x_{1})\cdot \ldots \cdot (x-x_{n}).} By the definition of Q {\displaystyle Q} this polynomial passes through ( x 1 , y 1 ) , . . . , ( x n , y n ) {\displaystyle (x_{1},y_{1}),...,(x_{n},y_{n})} and, as we have just shown, it also passes through ( x 0 , y 0 ) . {\displaystyle (x_{0},y_{0}).} Thus it is the unique polynomial of degree ≤ n {\displaystyle \leq n} which passes through these points. Therefore this polynomial is P ( x ) ; {\displaystyle P(x);} i.e.: P ( x ) = Q ( x ) + [ y 0 , … , y n ] ( x − x 1 ) ⋅ … ⋅ ( x − x n ) . {\displaystyle P(x)=Q(x)+[y_{0},\ldots ,y_{n}](x-x_{1})\cdot \ldots \cdot (x-x_{n}).} Thus we can write the last line in the first chain of equalities as ` y n + 1 − P ( x n + 1 ) {\displaystyle y_{n+1}-P(x_{n+1})} ' and have thus established that [ y 0 , … , y n + 1 ] ( x n + 1 − x 0 ) ⋅ … ⋅ ( x n + 1 − x n ) = y n + 1 − P ( x n + 1 ) . {\displaystyle [y_{0},\ldots ,y_{n+1}](x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})=y_{n+1}-P(x_{n+1}).} So we established Stm n + 1 {\displaystyle {\text{Stm}}_{n+1}} , and hence completed the proof of Fact 2. 
Now look at Fact 2: It can be formulated this way: If P {\displaystyle P} is the unique polynomial of degree at most n − 1 {\displaystyle n-1} whose graph passes through the points ( x 0 , y 0 ) , . . . , ( x n − 1 , y n − 1 ) , {\displaystyle (x_{0},y_{0}),...,(x_{n-1},y_{n-1}),} then P ( x ) + [ y 0 , … , y n ] ( x − x 0 ) ⋅ … ⋅ ( x − x n − 1 ) {\displaystyle P(x)+[y_{0},\ldots ,y_{n}](x-x_{0})\cdot \ldots \cdot (x-x_{n-1})} is the unique polynomial of degree at most n {\displaystyle n} passing through points ( x 0 , y 0 ) , . . . , ( x n − 1 , y n − 1 ) , ( x n , y n ) . {\displaystyle (x_{0},y_{0}),...,(x_{n-1},y_{n-1}),(x_{n},y_{n}).} So we see Newton interpolation permits indeed to add new interpolation points without destroying what has already been computed. The limit of the Newton polynomial if all nodes coincide is a Taylor polynomial , because the divided differences become derivatives. lim ( x 0 , … , x n ) → ( z , … , z ) f [ x 0 ] + f [ x 0 , x 1 ] ⋅ ( ξ − x 0 ) + ⋯ + f [ x 0 , … , x n ] ⋅ ( ξ − x 0 ) ⋅ ⋯ ⋅ ( ξ − x n − 1 ) = f ( z ) + f ′ ( z ) ⋅ ( ξ − z ) + ⋯ + f ( n ) ( z ) n ! ⋅ ( ξ − z ) n {\displaystyle {\begin{aligned}&\lim _{(x_{0},\dots ,x_{n})\to (z,\dots ,z)}f[x_{0}]+f[x_{0},x_{1}]\cdot (\xi -x_{0})+\dots +f[x_{0},\dots ,x_{n}]\cdot (\xi -x_{0})\cdot \dots \cdot (\xi -x_{n-1})\\&=f(z)+f'(z)\cdot (\xi -z)+\dots +{\frac {f^{(n)}(z)}{n!}}\cdot (\xi -z)^{n}\end{aligned}}} As can be seen from the definition of the divided differences new data points can be added to the data set to create a new interpolation polynomial without recalculating the old coefficients. And when a data point changes we usually do not have to recalculate all coefficients. Furthermore, if the x i are distributed equidistantly the calculation of the divided differences becomes significantly easier. Therefore, the divided-difference formulas are usually preferred over the Lagrange form for practical purposes. The divided differences can be written in the form of a table. For example, for a function f is to be interpolated on points x 0 , … , x n {\displaystyle x_{0},\ldots ,x_{n}} . Write Then the interpolating polynomial is formed as above using the topmost entries in each column as coefficients. For example, suppose we are to construct the interpolating polynomial to f ( x ) = tan( x ) using divided differences, at the points Using six digits of accuracy, we construct the table Thus, the interpolating polynomial is Given more digits of accuracy in the table, the first and third coefficients will be found to be zero. Another example: The sequence f 0 {\displaystyle f_{0}} such that f 0 ( 1 ) = 6 , f 0 ( 2 ) = 9 , f 0 ( 3 ) = 2 {\displaystyle f_{0}(1)=6,f_{0}(2)=9,f_{0}(3)=2} and f 0 ( 4 ) = 5 {\displaystyle f_{0}(4)=5} , i.e., they are 6 , 9 , 2 , 5 {\displaystyle 6,9,2,5} from x 0 = 1 {\displaystyle x_{0}=1} to x 3 = 4 {\displaystyle x_{3}=4} . You obtain the slope of order 1 {\displaystyle 1} in the following way: As we have the slopes of order 1 {\displaystyle 1} , it is possible to obtain the next order: Finally, we define the slope of order 3 {\displaystyle 3} : Once we have the slope, we can define the consequent polynomials:
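Returning to the Taylor-polynomial limit stated above, the claim can also be checked numerically: as the nodes collapse toward a point z, the top divided difference approaches f^(n)(z)/n!. A small sketch (the choice of f, z and spacing is illustrative, and it reuses the hypothetical divided_differences helper from the earlier example):

```python
import math

f = math.exp                                   # divided differences of exp should approach exp(z)/n!
z, n, h = 0.7, 3, 1e-3
xs = [z + k * h for k in range(n + 1)]         # nodes clustered around z; they collapse as h -> 0
top = divided_differences(xs, [f(x) for x in xs])[-1]
print(top, math.exp(z) / math.factorial(n))    # agree to roughly three decimal places at this h
```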
https://en.wikipedia.org/wiki/Newton_polynomial
In physics, Newtonian dynamics (also known as Newtonian mechanics ) is the study of the dynamics of a particle or a small body according to Newton's laws of motion . [ 1 ] [ 2 ] [ 3 ] Typically, the Newtonian dynamics occurs in a three-dimensional Euclidean space , which is flat. However, in mathematics Newton's laws of motion can be generalized to multidimensional and curved spaces. Often the term Newtonian dynamics is narrowed to Newton's second law m a = F {\displaystyle \displaystyle m\,\mathbf {a} =\mathbf {F} } . Consider N {\displaystyle \displaystyle N} particles with masses m 1 , … , m N {\displaystyle \displaystyle m_{1},\,\ldots ,\,m_{N}} in the regular three-dimensional Euclidean space . Let r 1 , … , r N {\displaystyle \displaystyle \mathbf {r} _{1},\,\ldots ,\,\mathbf {r} _{N}} be their radius-vectors in some inertial coordinate system. Then the motion of these particles is governed by Newton's second law applied to each of them The three-dimensional radius-vectors r 1 , … , r N {\displaystyle \displaystyle \mathbf {r} _{1},\,\ldots ,\,\mathbf {r} _{N}} can be built into a single n = 3 N {\displaystyle \displaystyle n=3N} -dimensional radius-vector. Similarly, three-dimensional velocity vectors v 1 , … , v N {\displaystyle \displaystyle \mathbf {v} _{1},\,\ldots ,\,\mathbf {v} _{N}} can be built into a single n = 3 N {\displaystyle \displaystyle n=3N} -dimensional velocity vector: In terms of the multidimensional vectors ( 2 ) the equations ( 1 ) are written as i.e. they take the form of Newton's second law applied to a single particle with the unit mass m = 1 {\displaystyle \displaystyle m=1} . Definition . The equations ( 3 ) are called the equations of a Newtonian dynamical system in a flat multidimensional Euclidean space , which is called the configuration space of this system. Its points are marked by the radius-vector r {\displaystyle \displaystyle \mathbf {r} } . The space whose points are marked by the pair of vectors ( r , v ) {\displaystyle \displaystyle (\mathbf {r} ,\mathbf {v} )} is called the phase space of the dynamical system ( 3 ). The configuration space and the phase space of the dynamical system ( 3 ) both are Euclidean spaces, i. e. they are equipped with a Euclidean structure. The Euclidean structure of them is defined so that the kinetic energy of the single multidimensional particle with the unit mass m = 1 {\displaystyle \displaystyle m=1} is equal to the sum of kinetic energies of the three-dimensional particles with the masses m 1 , … , m N {\displaystyle \displaystyle m_{1},\,\ldots ,\,m_{N}} : In some cases the motion of the particles with the masses m 1 , … , m N {\displaystyle \displaystyle m_{1},\,\ldots ,\,m_{N}} can be constrained. Typical constraints look like scalar equations of the form Constraints of the form ( 5 ) are called holonomic and scleronomic . In terms of the radius-vector r {\displaystyle \displaystyle \mathbf {r} } of the Newtonian dynamical system ( 3 ) they are written as Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system ( 3 ). Therefore, the constrained system has n = 3 N − K {\displaystyle \displaystyle n=3\,N-K} degrees of freedom. Definition . The constraint equations ( 6 ) define an n {\displaystyle \displaystyle n} -dimensional manifold M {\displaystyle \displaystyle M} within the configuration space of the Newtonian dynamical system ( 3 ). This manifold M {\displaystyle \displaystyle M} is called the configuration space of the constrained system. 
Its tangent bundle T M {\displaystyle \displaystyle TM} is called the phase space of the constrained system. Let q 1 , … , q n {\displaystyle \displaystyle q^{1},\,\ldots ,\,q^{n}} be the internal coordinates of a point of M {\displaystyle \displaystyle M} . Their usage is typical for the Lagrangian mechanics . The radius-vector r {\displaystyle \displaystyle \mathbf {r} } is expressed as some definite function of q 1 , … , q n {\displaystyle \displaystyle q^{1},\,\ldots ,\,q^{n}} : The vector-function ( 7 ) resolves the constraint equations ( 6 ) in the sense that upon substituting ( 7 ) into ( 6 ) the equations ( 6 ) are fulfilled identically in q 1 , … , q n {\displaystyle \displaystyle q^{1},\,\ldots ,\,q^{n}} . The velocity vector of the constrained Newtonian dynamical system is expressed in terms of the partial derivatives of the vector-function ( 7 ): The quantities q ˙ 1 , … , q ˙ n {\displaystyle \displaystyle {\dot {q}}^{1},\,\ldots ,\,{\dot {q}}^{n}} are called internal components of the velocity vector. Sometimes they are denoted with the use of a separate symbol and then treated as independent variables. The quantities are used as internal coordinates of a point of the phase space T M {\displaystyle \displaystyle TM} of the constrained Newtonian dynamical system. Geometrically, the vector-function ( 7 ) implements an embedding of the configuration space M {\displaystyle \displaystyle M} of the constrained Newtonian dynamical system into the 3 N {\displaystyle \displaystyle 3\,N} -dimensional flat configuration space of the unconstrained Newtonian dynamical system ( 3 ). Due to this embedding the Euclidean structure of the ambient space induces the Riemannian metric onto the manifold M {\displaystyle \displaystyle M} . The components of the metric tensor of this induced metric are given by the formula where ( , ) {\displaystyle \displaystyle (\ ,\ )} is the scalar product associated with the Euclidean structure ( 4 ). Since the Euclidean structure of an unconstrained system of N {\displaystyle \displaystyle N} particles is introduced through their kinetic energy, the induced Riemannian structure on the configuration space N {\displaystyle \displaystyle N} of a constrained system preserves this relation to the kinetic energy: The formula ( 12 ) is derived by substituting ( 8 ) into ( 4 ) and taking into account ( 11 ). For a constrained Newtonian dynamical system the constraints described by the equations ( 6 ) are usually implemented by some mechanical framework. This framework produces some auxiliary forces including the force that maintains the system within its configuration manifold M {\displaystyle \displaystyle M} . Such a maintaining force is perpendicular to M {\displaystyle \displaystyle M} . It is called the normal force . The force F {\displaystyle \displaystyle \mathbf {F} } from ( 6 ) is subdivided into two components The first component in ( 13 ) is tangent to the configuration manifold M {\displaystyle \displaystyle M} . The second component is perpendicular to M {\displaystyle \displaystyle M} . In coincides with the normal force N {\displaystyle \displaystyle \mathbf {N} } . Like the velocity vector ( 8 ), the tangent force F ∥ {\displaystyle \displaystyle \mathbf {F} _{\parallel }} has its internal presentation The quantities F 1 , … , F n {\displaystyle F^{1},\,\ldots ,\,F^{n}} in ( 14 ) are called the internal components of the force vector. 
The Newtonian dynamical system ( 3 ) constrained to the configuration manifold M {\displaystyle \displaystyle M} by the constraint equations ( 6 ) is described by the differential equations where Γ i j s {\displaystyle \Gamma _{ij}^{s}} are Christoffel symbols of the metric connection produced by the Riemannian metric ( 11 ). Mechanical systems with constraints are usually described by Lagrange equations : where T = T ( q 1 , … , q n , w 1 , … , w n ) {\displaystyle T=T(q^{1},\ldots ,q^{n},w^{1},\ldots ,w^{n})} is the kinetic energy the constrained dynamical system given by the formula ( 12 ). The quantities Q 1 , … , Q n {\displaystyle Q_{1},\,\ldots ,\,Q_{n}} in ( 16 ) are the inner covariant components of the tangent force vector F ∥ {\displaystyle \mathbf {F} _{\parallel }} (see ( 13 ) and ( 14 )). They are produced from the inner contravariant components F 1 , … , F n {\displaystyle F^{1},\,\ldots ,\,F^{n}} of the vector F ∥ {\displaystyle \mathbf {F} _{\parallel }} by means of the standard index lowering procedure using the metric ( 11 ): The equations ( 16 ) are equivalent to the equations ( 15 ). However, the metric ( 11 ) and other geometric features of the configuration manifold M {\displaystyle \displaystyle M} are not explicit in ( 16 ). The metric ( 11 ) can be recovered from the kinetic energy T {\displaystyle \displaystyle T} by means of the formula
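A concrete instance of the constrained formalism, worked out symbolically, may help: for a planar pendulum the configuration manifold M is one-dimensional, the embedding (7) is r(q) = (l sin q, -l cos q), the induced metric (11) reduces to g_11 = m l^2, and the Lagrange equations (16) reproduce the familiar pendulum equation. The following SymPy sketch (illustrative, not from the article) carries this out:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
q = sp.Function('q')(t)                      # internal coordinate q^1: the pendulum angle

# Embedding r(q) of the 1-dimensional configuration manifold into the plane, cf. (7)
x = l * sp.sin(q)
y = -l * sp.cos(q)

# Kinetic energy induced by the ambient Euclidean structure, cf. (12): T = (1/2) g_11 (dq/dt)^2
T = sp.simplify(sp.Rational(1, 2) * m * (sp.diff(x, t)**2 + sp.diff(y, t)**2))
print(T)                                     # l**2*m*Derivative(q(t), t)**2/2, so g_11 = m*l**2

# Lagrange equation, cf. (16), with gravity entering through the potential V = m*g*y
L = T - m * g * y
print(euler_equations(L, [q], t)[0])         # the pendulum equation m*l**2*q'' + m*g*l*sin(q) = 0 (up to sign)
```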
https://en.wikipedia.org/wiki/Newtonian_dynamics
A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are at every point linearly correlated to the local strain rate — the rate of change of its deformation over time. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Stresses are proportional to the rate of change of the fluid's velocity vector . A fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (i.e., its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively. Newtonian fluids are the easiest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck (which becomes stiffer when vigorously sheared) and non-drip paint (which becomes thinner when sheared ). Other examples include many polymer solutions (which exhibit the Weissenberg effect ), molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton , who first used the differential equation to postulate the relation between the shear strain rate and shear stress for such fluids. An element of a flowing liquid or gas will endure forces from the surrounding fluid, including viscous stress forces that cause it to gradually deform over time. These forces can be mathematically first order approximated by a viscous stress tensor , usually denoted by τ {\displaystyle \tau } . The deformation of a fluid element, relative to some previous state, can be first order approximated by a strain tensor that changes with time. The time derivative of that tensor is the strain rate tensor , that expresses how the element's deformation is changing with time; and is also the gradient of the velocity vector field v {\displaystyle v} at that point, often denoted ∇ v {\displaystyle \nabla v} . The tensors τ {\displaystyle \tau } and ∇ v {\displaystyle \nabla v} can be expressed by 3×3 matrices , relative to any chosen coordinate system . The fluid is said to be Newtonian if these matrices are related by the equation τ = μ ( ∇ v ) {\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\mu }}(\nabla v)} where μ {\displaystyle \mu } is a fixed 3×3×3×3 fourth order tensor that does not depend on the velocity or stress state of the fluid. For an incompressible and isotropic Newtonian fluid in laminar flow only in the direction x (i.e. 
where viscosity is isotropic in the fluid), the shear stress is related to the strain rate by the simple constitutive equation τ = μ d u d y {\displaystyle \tau =\mu {\frac {du}{dy}}} where In case of a general 2D incompressibile flow in the plane x, y, the Newton constitutive equation become: τ x y = μ ( ∂ u ∂ y + ∂ v ∂ x ) {\displaystyle \tau _{xy}=\mu \left({\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}\right)} where: We can now generalize to the case of an incompressible flow with a general direction in the 3D space, the above constitutive equation becomes τ i j = μ ( ∂ v i ∂ x j + ∂ v j ∂ x i ) {\displaystyle \tau _{ij}=\mu \left({\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}\right)} where or written in more compact tensor notation τ = μ ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\tau }}=\mu \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{T}\right)} where ∇ u {\displaystyle \nabla \mathbf {u} } is the flow velocity gradient. An alternative way of stating this constitutive equation is: where ε = 1 2 ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)} is the rate-of- strain tensor . So this decomposition can be made explicit as: [ 5 ] This constitutive equation is also called the Newton law of viscosity . The total stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} can always be decomposed as the sum of the isotropic stress tensor and the deviatoric stress tensor ( σ ′ {\displaystyle {\boldsymbol {\sigma }}'} ): σ = 1 3 tr ⁡ ( σ ) I + σ ′ {\displaystyle {\boldsymbol {\sigma }}={\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})\mathbf {I} +{\boldsymbol {\sigma }}'} In the incompressible case, the isotropic stress is simply proportional to the thermodynamic pressure p {\displaystyle p} : p = − 1 3 tr ⁡ ( σ ) = − 1 3 ∑ k σ k k {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})=-{\frac {1}{3}}\sum _{k}\sigma _{kk}} and the deviatoric stress is coincident with the shear stress tensor τ {\displaystyle {\boldsymbol {\tau }}} : σ ′ = τ = μ ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{T}\right)} The stress constitutive equation then becomes σ i j = − p δ i j + μ ( ∂ v i ∂ x j + ∂ v j ∂ x i ) {\displaystyle \sigma _{ij}=-p\delta _{ij}+\mu \left({\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}\right)} or written in more compact tensor notation σ = − p I + μ ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{T}\right)} where I {\displaystyle \mathbf {I} } is the identity tensor. The Newton's constitutive law for a compressible flow results from the following assumptions on the Cauchy stress tensor: [ 5 ] σ ( ε ) = − p I + λ tr ⁡ ( ε ) I + 2 μ ε {\displaystyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\lambda \operatorname {tr} ({\boldsymbol {\varepsilon }})\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}} where I {\textstyle \mathbf {I} } is the identity tensor , and tr ⁡ ( ε ) {\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})} is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as: σ = − p I + λ ( ∇ ⋅ u ) I + μ ( ∇ u + ( ∇ u ) T ) . 
{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).} Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow: tr ⁡ ( ε ) = ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .} Given this relation, and since the trace of the identity tensor in three dimensions is three: tr ⁡ ( I ) = 3. {\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3.} the trace of the stress tensor in three dimensions becomes: tr ⁡ ( σ ) = − 3 p + ( 3 λ + 2 μ ) ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .} So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics: [ 6 ] σ = − [ p − ( λ + 2 3 μ ) ( ∇ ⋅ u ) ] I + μ ( ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ) {\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)} Introducing the bulk viscosity ζ {\textstyle \zeta } , ζ ≡ λ + 2 3 μ , {\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,} we arrive to the linear constitutive equation in the form usually employed in thermal hydraulics : [ 5 ] σ = − [ p − ζ ( ∇ ⋅ u ) ] I + μ [ ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ] {\displaystyle {\boldsymbol {\sigma }}=-[p-\zeta (\nabla \cdot \mathbf {u} )]\mathbf {I} +\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]} which can also be arranged in the other usual form: [ 7 ] σ = − p I + μ ( ∇ u + ( ∇ u ) T ) + ( ζ − 2 3 μ ) ( ∇ ⋅ u ) I . {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .} Note that in the compressible case the pressure is no more proportional to the isotropic stress term, since there is the additional bulk viscosity term: p = − 1 3 tr ⁡ ( σ ) + ζ ( ∇ ⋅ u ) {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )} and the deviatoric stress tensor σ ′ {\displaystyle {\boldsymbol {\sigma }}'} is still coincident with the shear stress tensor τ {\displaystyle {\boldsymbol {\tau }}} (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity: σ ′ = τ = μ [ ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ] {\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]} Note that the incompressible case correspond to the assumption that the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with ∇ ⋅ u = 0 {\textstyle \nabla \cdot \mathbf {u} =0} . [ 8 ] So one returns to the expressions for pressure and deviatoric stress seen in the preceding paragraph. 
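The algebra of the compressible decomposition above is easy to spot-check numerically. The following sketch (arbitrary illustrative values for the velocity gradient, pressure and viscosities) builds the stress tensor from the constitutive law and verifies both the mechanical-pressure relation and the traceless deviatoric/shear identity:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))       # an arbitrary velocity gradient, for illustration only
p, mu, lam = 2.0, 0.8, 0.3             # thermodynamic pressure and the two viscosity coefficients
I = np.eye(3)

div_u = np.trace(grad_u)               # tr(eps) = div(u)
eps = 0.5 * (grad_u + grad_u.T)        # rate-of-strain tensor
sigma = -p * I + lam * div_u * I + 2 * mu * eps      # Newton's constitutive law, compressible form

# Mechanical vs thermodynamic pressure: -tr(sigma)/3 = p - zeta * div(u), with zeta = lam + 2*mu/3
zeta = lam + 2.0 * mu / 3.0
print(np.isclose(-np.trace(sigma) / 3.0, p - zeta * div_u))      # True

# The deviatoric part coincides with the (traceless) viscous shear stress tensor
dev = sigma - np.trace(sigma) / 3.0 * I
tau = mu * (grad_u + grad_u.T - 2.0 / 3.0 * div_u * I)
print(np.allclose(dev, tau))                                     # True
```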
Both bulk viscosity ζ {\textstyle \zeta } and dynamic viscosity μ {\textstyle \mu } need not be constant – in general, they depend on two thermodynamics variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficient in the conservation variables is called an equation of state . [ 9 ] Apart from its dependence of pressure and temperature, the second viscosity coefficient also depends on the process, that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definitive frequency that alternatively compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called the dispersion . In some cases, the second viscosity ζ {\textstyle \zeta } can be assumed to be constant in which case, the effect of the volume viscosity ζ {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure : [ 10 ] as demonstrated below. ∇ ⋅ ( ∇ ⋅ u ) I = ∇ ( ∇ ⋅ u ) , {\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),} p ¯ ≡ p − ζ ∇ ⋅ u , {\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} ,} However, this difference is usually neglected most of the time (that is whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, [ 11 ] where second viscosity coefficient becomes important) by explicitly assuming ζ = 0 {\textstyle \zeta =0} . The assumption of setting ζ = 0 {\textstyle \zeta =0} is called as the Stokes hypothesis . [ 12 ] The validity of Stokes hypothesis can be demonstrated for monoatomic gas both experimentally and from the kinetic theory; [ 13 ] for other gases and liquids, Stokes hypothesis is generally incorrect. Finally, note that Stokes hypothesis is less restrictive that the one of incompressible flow. In fact, in the incompressible flow both the bulk viscosity term, and the shear viscosity term in the divergence of the flow velocity term disappears, while in the Stokes hypothesis the first term also disappears but the second one still remains. More generally, in a non-isotropic Newtonian fluid, the coefficient μ {\displaystyle \mu } that relates internal friction stresses to the spatial derivatives of the velocity field is replaced by a nine-element viscous stress tensor μ i j {\displaystyle \mu _{ij}} . There is general formula for friction force in a liquid: The vector differential of friction force is equal the viscosity tensor increased on vector product differential of the area vector of adjoining a liquid layers and rotor of velocity: d F = μ i j d S × ∇ × u {\displaystyle d\mathbf {F} =\mu _{ij}\,d\mathbf {S} \times \nabla \times \,\mathbf {u} } where μ i j {\displaystyle \mu _{ij}} is the viscosity tensor . The diagonal components of viscosity tensor is molecular viscosity of a liquid, and not diagonal components – turbulence eddy viscosity . [ 14 ] The following equation illustrates the relation between shear rate and shear stress for a fluid with laminar flow only in the direction x : τ x y = μ d v x d y , {\displaystyle \tau _{xy}=\mu {\frac {\mathrm {d} v_{x}}{\mathrm {d} y}},} where: If viscosity μ {\displaystyle \mu } does not vary with rate of deformation the fluid is Newtonian. 
The power law model is used to display the behavior of Newtonian and non-Newtonian fluids and measures shear stress as a function of strain rate. The relationship between shear stress, strain rate and the velocity gradient for the power law model are: τ x y = − m | γ ˙ | n − 1 d v x d y , {\displaystyle \tau _{xy}=-m\left|{\dot {\gamma }}\right|^{n-1}{\frac {dv_{x}}{dy}},} where If The relationship between the shear stress and shear rate in a casson fluid model is defined as follows: τ = τ 0 + S d V d y {\displaystyle {\sqrt {\tau }}={\sqrt {\tau _{0}}}+S{\sqrt {dV \over dy}}} where τ 0 is the yield stress and S = μ ( 1 − H ) α , {\displaystyle S={\sqrt {\frac {\mu }{(1-H)^{\alpha }}}},} where α depends on protein composition and H is the Hematocrit number. Water , air , alcohol , glycerol , and thin motor oil are all examples of Newtonian fluids over the range of shear stresses and shear rates encountered in everyday life. Single-phase fluids made up of small molecules are generally (although not exclusively) Newtonian.
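To make the contrast with the Newtonian case concrete, the following sketch evaluates the power-law model above for a few shear rates and reports the apparent viscosity τ/γ̇. The parameter values are illustrative, and the stress is taken in magnitude, dropping the sign convention of the formula above:

```python
import numpy as np

def power_law_stress(shear_rate, m, n):
    """Shear stress magnitude |tau| = m * |gamma_dot|**n for the power-law model above."""
    return m * np.abs(shear_rate) ** n

gamma_dot = np.logspace(-1, 2, 4)          # shear rates 0.1, 1, 10, 100 s^-1 (illustrative)
for n, label in [(1.0, "Newtonian (n = 1)"),
                 (0.5, "shear-thinning (n < 1)"),
                 (1.5, "shear-thickening (n > 1)")]:
    tau = power_law_stress(gamma_dot, m=1.0, n=n)
    apparent_visc = tau / gamma_dot        # mu_app = tau / gamma_dot
    print(label, np.round(apparent_visc, 3))
# For n = 1 the apparent viscosity is constant (Newtonian); otherwise it varies with shear rate.
```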
https://en.wikipedia.org/wiki/Newtonian_fluid
In physics , the Newtonian limit is a mathematical approximation applicable to physical systems exhibiting (1) weak gravitation , (2) objects moving slowly compared to the speed of light, and (3) slowly changing (or completely static) gravitational fields. [ 1 ] Under these conditions, Newton's law of universal gravitation may be used to obtain values that are accurate. In general, and in the presence of significant gravitation, the general theory of relativity must be used. In the Newtonian limit, spacetime is approximately flat [ 1 ] and the Minkowski metric may be used over finite distances. In this case 'approximately flat' is defined as space in which gravitational effect approaches 0, mathematically actual spacetime and Minkowski space are not identical, Minkowski space is an idealized model. In special relativity, Newtonian behaviour can in most cases be obtained by performing the limit v → 0 {\displaystyle v\to 0} . In this limit, the often appearing gamma factor becomes 1 γ = 1 1 − v 2 / c 2 → 1 {\displaystyle {\begin{aligned}\gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}&\to 1\end{aligned}}} and the Lorentz transformations between reference frames turn into Galileo transformations t ′ = γ ( t − v / c 2 x ) → t ′ = t x ′ = γ ( x − v t ) → x ′ = x − v t {\displaystyle {\begin{aligned}t'=\gamma (t-v/c^{2}\,x)&\to t'=t\\x'=\gamma (x-v\,t)&\to x'=x-v\,t\end{aligned}}} The geodesic equation for a free particle on curved spacetime with metric g μ ν {\displaystyle g^{\mu \nu }} can be derived from the action S [ x , x ˙ ] = − m c ∫ d t − g μ ν x ˙ μ x ˙ ν {\displaystyle {\begin{aligned}S[x,{\dot {x}}]&=-m\,c\,\int dt\,{\sqrt {-g_{\mu \nu }\,{\dot {x}}^{\mu }\,{\dot {x}}^{\nu }}}\end{aligned}}} If the spacetime-metric is g = ( − 1 − 2 ϕ ( x ) c 2 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle {\begin{aligned}g&={\begin{pmatrix}-1-{\frac {2\,\phi (x)}{c^{2}}}&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}\end{aligned}}} then, ignoring all contributions of order 1 c 2 {\displaystyle {\frac {1}{c^{2}}}} the action becomes S [ x , x ˙ ] = − m c ∫ d t c 2 + 2 ϕ ( x ) − x ˙ 2 = − m c ∫ d t ( c 2 + 1 2 c 2 ( 2 ϕ ( x ) − x ˙ 2 ) + . . . ) ≈ ∫ d t ( − m c 2 + 1 2 m x ˙ 2 − m ϕ ( x ) ) {\displaystyle {\begin{aligned}S[x,{\dot {x}}]&=-m\,c\,\int dt\,{\sqrt {c^{2}+2\,\phi (x)-{\dot {\mathbf {x} }}^{2}}}\\&=-m\,c\,\int dt\,\left({\sqrt {c^{2}}}+{\frac {1}{2\,{\sqrt {c^{2}}}}}\,\left(2\,\phi (x)-{\dot {\mathbf {x} }}^{2}\right)+...\right)\\&\approx \int dt\,\left(-m\,c^{2}+{\frac {1}{2}}m{\dot {\mathbf {x} }}^{2}-m\,\phi (x)\right)\end{aligned}}} which is the action that reproduces the Newtonian equations of motion of a particle in a gravitational potential ϕ ( x ) {\displaystyle \phi (x)} [ 2 ] This relativity -related article is a stub . You can help Wikipedia by expanding it .
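The quality of the Newtonian-limit expansion of the action above can be checked numerically. The sketch below (illustrative velocities and potentials, chosen only to show the trend) compares the exact integrand -mc sqrt(c^2 + 2φ - ẋ^2) with its truncation -mc^2 + (1/2) m ẋ^2 - m φ, relative to the size of the retained velocity and potential terms:

```python
import numpy as np

c, m = 3.0e8, 1.0                            # speed of light (m/s) and a unit test mass
cases = [(3.0e7, -9.0e14),                   # v = 0.1 c,  phi = -0.01  c^2 (illustrative)
         (3.0e6, -9.0e12)]                   # v = 0.01 c, phi = -0.0001 c^2 (illustrative)
for v, phi in cases:
    exact = -m * c * np.sqrt(c**2 + 2 * phi - v**2)      # integrand of the exact action above
    newton = -m * c**2 + 0.5 * m * v**2 - m * phi        # its Newtonian-limit expansion
    rel_err = abs(exact - newton) / abs(0.5 * m * v**2 - m * phi)
    print(f"v/c = {v/c:.2f}: relative error of the expansion = {rel_err:.1e}")
# The error falls off with (v/c)^2 and phi/c^2, as expected of the 1/c^2 truncation.
```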
https://en.wikipedia.org/wiki/Newtonian_limit
In materials science , a material is said to be " Newtonian " if it exhibits a linear relationship between stress and strain rate . [ 1 ]
https://en.wikipedia.org/wiki/Newtonian_material
In classical mechanics , the Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Traditionally the Newton–Euler equations is the grouping together of Euler's two laws of motion for a rigid body into a single equation with 6 components, using column vectors and matrices . These laws relate the motion of the center of gravity of a rigid body with the sum of forces and torques (or synonymously moments ) acting on the rigid body. With respect to a coordinate frame whose origin coincides with the body's center of mass for τ ( torque ) and an inertial frame of reference for F ( force ), they can be expressed in matrix form as: where With respect to a coordinate frame located at point P that is fixed in the body and not coincident with the center of mass, the equations assume the more complex form: where c is the vector from P to the center of mass of the body expressed in the body-fixed frame , and denote skew-symmetric cross product matrices . The left hand side of the equation—which includes the sum of external forces, and the sum of external moments about P —describes a spatial wrench , see screw theory . The inertial terms are contained in the spatial inertia matrix while the fictitious forces are contained in the term: [ 6 ] When the center of mass is not coincident with the coordinate frame (that is, when c is nonzero), the translational and angular accelerations ( a and α ) are coupled, so that each is associated with force and torque components. The Newton–Euler equations are used as the basis for more complicated "multi-body" formulations ( screw theory ) that describe the dynamics of systems of rigid bodies connected by joints and other constraints. Multi-body problems can be solved by a variety of numerical algorithms. [ 2 ] [ 6 ] [ 7 ]
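For the simplest case, with the coordinate frame at the center of mass, the six equations decouple into F = m a and τ = I_cm α + ω × (I_cm ω), and the 6×6 form can be assembled and solved directly. The following sketch uses illustrative mass, inertia and wrench values; the more complex offset-frame variant with the coupling terms is not reproduced here:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]_x such that skew(v) @ w == v x w."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [ z, 0.0, -x],
                     [-y,  x, 0.0]])

# Illustrative rigid body, described in a frame at its center of mass
m = 2.0                                        # mass, kg
I_cm = np.diag([0.1, 0.2, 0.3])                # inertia tensor about the center of mass
omega = np.array([0.5, -1.0, 0.25])            # current angular velocity

F   = np.array([1.0, 0.0, -3.0])               # applied force
tau = np.array([0.2, 0.1,  0.0])               # applied torque about the center of mass

# 6x6 form: [F; tau] = blkdiag(m*I3, I_cm) @ [a; alpha] + [0; omega x (I_cm omega)]
M = np.block([[m * np.eye(3), np.zeros((3, 3))],
              [np.zeros((3, 3)), I_cm]])
bias = np.concatenate([np.zeros(3), skew(omega) @ (I_cm @ omega)])
accel = np.linalg.solve(M, np.concatenate([F, tau]) - bias)
a, alpha = accel[:3], accel[3:]
print("a =", a)            # equals F / m
print("alpha =", alpha)    # equals I_cm^{-1} (tau - omega x I_cm omega)
```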
https://en.wikipedia.org/wiki/Newton–Euler_equations
In geometry , the Newton–Gauss line (or Gauss–Newton line ) is the line joining the midpoints of the three diagonals of a complete quadrilateral . The midpoints of the two diagonals of a convex quadrilateral with at most two parallel sides are distinct and thus determine a line, the Newton line . If the sides of such a quadrilateral are extended to form a complete quadrangle, the diagonals of the quadrilateral remain diagonals of the complete quadrangle and the Newton line of the quadrilateral is the Newton–Gauss line of the complete quadrangle. Any four lines in general position (no two lines are parallel, and no three are concurrent) form a complete quadrilateral . This configuration consists of a total of six points, the intersection points of the four lines, with three points on each line and precisely two lines through each point. [ 1 ] These six points can be split into pairs so that the line segments determined by any pair do not intersect any of the given four lines except at the endpoints. These three line segments are called diagonals of the complete quadrilateral. It is a well-known theorem that the three midpoints of the diagonals of a complete quadrilateral are collinear . [ 2 ] There are several proofs of the result based on areas [ 2 ] or wedge products [ 3 ] or, as the following proof, on Menelaus's theorem , due to Hillyer and published in 1920. [ 4 ] Let the complete quadrilateral ABCA'B'C' be labeled as in the diagram with diagonals AA' , BB' , CC' and their respective midpoints L, M, N . Let the midpoints of BC , CA' , A'B be P, Q, R respectively. Using similar triangles it is seen that QR intersects AA' at L , RP intersects BB' at M and PQ intersects CC' at N . Again, similar triangles provide the following proportions, However, the line A'B'C intersects the sides of triangle △ ABC , so by Menelaus's theorem the product of the terms on the right hand sides is −1. Thus, the product of the terms on the left hand sides is also −1 and again by Menelaus's theorem, the points L, M, N are collinear on the sides of triangle △ PQR . The following are some results that use the Newton–Gauss line of complete quadrilaterals that are associated with cyclic quadrilaterals , based on the work of Barbu and Patrascu. [ 5 ] Given any cyclic quadrilateral ABCD , let point F be the point of intersection between the two diagonals AC and BD . Extend the diagonals AB and CD until they meet at the point of intersection, E . Let the midpoint of the segment EF be N , and let the midpoint of the segment BC be M (Figure 1). If the midpoint of the line segment BF is P , the Newton–Gauss line of the complete quadrilateral ABCDEF and the line PM determine an angle ∠ PMN equal to ∠ EFD . First show that the triangles △ NPM , △ EDF are similar . Since BE ∥ PN and FC ∥ PM , we know ∠ NPM = ∠ EAC . Also, B E ¯ P N ¯ = F C ¯ P M ¯ = 2. {\displaystyle {\tfrac {\overline {BE}}{\overline {PN}}}={\tfrac {\overline {FC}}{\overline {PM}}}=2.} In the cyclic quadrilateral ABCD , these equalities hold: Therefore, ∠ NPM = ∠ EDF . Let R 1 , R 2 be the radii of the circumcircles of △ EDB , △ FCD respectively. Apply the law of sines to the triangles, to obtain: Since BE = 2 · PN and FC = 2 · PM , this shows the equality P N ¯ P M ¯ = D E ¯ D F ¯ . {\displaystyle {\tfrac {\overline {PN}}{\overline {PM}}}={\tfrac {\overline {DE}}{\overline {DF}}}.} The similarity of triangles △ PMN , △ DFE follows, and ∠ NMP = ∠ EFD . If Q is the midpoint of the line segment FC , it follows by the same reasoning that ∠ NMQ = ∠ EFA . 
The line through E parallel to the Newton–Gauss line of the complete quadrilateral ABCDEF and the line EF are isogonal lines of ∠ BEC , that is, each line is a reflection of the other about the angle bisector . [ 5 ] (Figure 2) Triangles △ EDF , △ NPM are similar by the above argument, so ∠ DEF = ∠ PNM . Let E' be the point of intersection of BC and the line parallel to the Newton–Gauss line NM through E . Since PN ∥ BE and NM ∥ EE', ∠ BEF = ∠ PNF , and ∠ FNM = ∠ E'EF . Therefore, Let G and H be the orthogonal projections of the point F on the lines AB and CD respectively. The quadrilaterals MPGN and MQHN are cyclic quadrilaterals. [ 5 ] ∠ EFD = ∠ PMN , as previously shown. The points P and N are the respective circumcenters of the right triangles △ BFG , △ EFG . Thus, ∠ PGF = ∠ PFG and ∠ FGN = ∠ GFN . Therefore, Therefore, MPGN is a cyclic quadrilateral, and by the same reasoning, MQHN also lies on a circle. Extend the lines GF, HF to intersect EC, EB at I, J respectively (Figure 4). The complete quadrilaterals EFGHIJ and ABCDEF have the same Newton–Gauss line. [ 5 ] The two complete quadrilaterals have a shared diagonal, EF . N lies on the Newton–Gauss line of both quadrilaterals. N is equidistant from G and H , since it is the circumcenter of the cyclic quadrilateral EGFH . If triangles △ GMP , △ HMQ are congruent , and it will follow that M lies on the perpendicular bisector of the line HG . Therefore, the line MN contains the midpoint of GH , and is the Newton–Gauss line of EFGHIJ . To show that the triangles △ GMP , △ HMQ are congruent, first observe that PMQF is a parallelogram , since the points M, P are midpoints of BF , BC respectively. Therefore, Also note that Hence, Therefore, △ GMP and △ HMQ are congruent by SAS. Due to △ GMP , △ HMQ being congruent triangles, their circumcircles MPGN, MQHN are also congruent . The point at infinity along the Newton–Gauss line is the isogonal conjugate of the Miquel point. Dao Thanh Oai showed a generalization of the Newton–Gauss line. [ 6 ] For a triangle ABC , let l an arbitrary line and A 0 B 0 C 0 the Cevian triangle of an arbitrary point P . l intersects BC, CA , and AB at A 1 , B 1 , and C 1 respectively. Then AA 1 ∩ B 0 C 0 , BB 1 ∩ C 0 A 0 , and CC 1 ∩ A 0 B 0 are colinear . If P is the centroid of the triangle ABC , the line is Newton–Gauss line of the quadrilateral composed of AB, BC, CA, and l . The Newton–Gauss line proof was developed by the two mathematicians it is named after: Sir Isaac Newton and Carl Friedrich Gauss . [ citation needed ] The initial framework for this theorem is from the work of Newton , in his previous theorem on the Newton line , in which Newton showed that the center of a conic inscribed in a quadrilateral lies on the Newton–Gauss line. [ 7 ] The theorem of Gauss and Bodenmiller states that the three circles whose diameters are the diagonals of a complete quadrilateral are coaxal . [ 8 ]
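The collinearity theorem itself is easy to verify numerically: take four random lines in general position, intersect the complementary pairs to obtain the three diagonals, and check that the three midpoints span a degenerate triangle. A hedged sketch (illustrative, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(1)

def intersect(l1, l2):
    """Intersection point of two lines given as coefficient triples of a*x + b*y + c = 0."""
    A = np.array([l1[:2], l2[:2]])
    rhs = -np.array([l1[2], l2[2]])
    return np.linalg.solve(A, rhs)

# Four random lines (almost surely in general position) form a complete quadrilateral
lines = rng.normal(size=(4, 3))

# Its three diagonals join intersection points of complementary pairs of lines
pairs = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
midpoints = []
for (i, j), (k, h) in pairs:
    p = intersect(lines[i], lines[j])
    q = intersect(lines[k], lines[h])
    midpoints.append((p + q) / 2.0)

# Collinearity check: the signed area of the triangle of midpoints is (numerically) zero
a, b, c = midpoints
det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
scale = max(np.linalg.norm(b - a), np.linalg.norm(c - a))
print(det / scale**2)      # of the order of machine precision: the midpoints are collinear
```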
https://en.wikipedia.org/wiki/Newton–Gauss_line
The Newton–Pepys problem is a probability problem concerning the probability of throwing sixes from a certain number of dice. [ 1 ] In 1693 Samuel Pepys and Isaac Newton corresponded over a problem posed to Pepys by a school teacher named John Smith. [ 2 ] The problem was: Which of the following three propositions has the greatest chance of success? Pepys initially thought that outcome C had the highest probability, but Newton correctly concluded that outcome A actually has the highest probability. The probabilities of outcomes A, B and C are: [ 1 ] These results may be obtained by applying the binomial distribution (although Newton obtained them from first principles). In general, if P( n ) is the probability of throwing at least n sixes with 6 n dice, then: As n grows, P( n ) decreases monotonically towards an asymptotic limit of 1/2. The solution outlined above can be implemented in R as follows: Although Newton correctly calculated the odds of each bet, he provided a separate intuitive explanation to Pepys. He imagined that B and C toss their dice in groups of six, and said that A was most favorable because it required a 6 in only one toss, while B and C required a 6 in each of their tosses. This explanation assumes that a group does not produce more than one 6, so it does not actually correspond to the original problem. [ 3 ] A natural generalization of the problem is to consider n non-necessarily fair dice, with p the probability that each die will select the 6 face when thrown (notice that actually the number of faces of the dice and which face should be selected are irrelevant). If r is the total number of dice selecting the 6 face, then P ( r ≥ k ; n , p ) {\displaystyle P(r\geq k;n,p)} is the probability of having at least k correct selections when throwing exactly n dice. Then the original Newton–Pepys problem can be generalized as follows: Let ν 1 , ν 2 {\displaystyle \nu _{1},\nu _{2}} be natural positive numbers s.t. ν 1 ≤ ν 2 {\displaystyle \nu _{1}\leq \nu _{2}} . Is then P ( r ≥ ν 1 k ; ν 1 n , p ) {\displaystyle P(r\geq \nu _{1}k;\nu _{1}n,p)} not smaller than P ( r ≥ ν 2 k ; ν 2 n , p ) {\displaystyle P(r\geq \nu _{2}k;\nu _{2}n,p)} for all n, p, k ? Notice that, with this notation, the original Newton–Pepys problem reads as: is P ( r ≥ 1 ; 6 , 1 / 6 ) ≥ P ( r ≥ 2 ; 12 , 1 / 6 ) ≥ P ( r ≥ 3 ; 18 , 1 / 6 ) {\displaystyle P(r\geq 1;6,1/6)\geq P(r\geq 2;12,1/6)\geq P(r\geq 3;18,1/6)} ? As noticed in Rubin and Evans (1961), there are no uniform answers to the generalized Newton–Pepys problem since answers depend on k, n and p . There are nonetheless some variations of the previous questions that admit uniform answers: (from Chaundy and Bullard (1960)): [ 4 ] If k 1 , k 2 , n {\displaystyle k_{1},k_{2},n} are positive natural numbers, and k 1 < k 2 {\displaystyle k_{1}<k_{2}} , then P ( r ≥ k 1 ; k 1 n , 1 n ) > P ( r ≥ k 2 ; k 2 n , 1 n ) {\displaystyle P(r\geq k_{1};k_{1}n,{\frac {1}{n}})>P(r\geq k_{2};k_{2}n,{\frac {1}{n}})} . If k , n 1 , n 2 {\displaystyle k,n_{1},n_{2}} are positive natural numbers, and n 1 < n 2 {\displaystyle n_{1}<n_{2}} , then P ( r ≥ k ; k n 1 , 1 n 1 ) > P ( r ≥ k ; k n 2 , 1 n 2 ) {\displaystyle P(r\geq k;kn_{1},{\frac {1}{n_{1}}})>P(r\geq k;kn_{2},{\frac {1}{n_{2}}})} . 
(from Varagnolo, Pillonetto and Schenato (2013)): [ 5 ] If ν1, ν2, n, k are positive natural numbers, and ν1 ≤ ν2, k ≤ n, p ∈ [0, 1], then P(r = ν1k; ν1n, p) ≥ P(r = ν2k; ν2n, p).
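As noted above, the original problem can be solved with the binomial distribution. The following minimal R sketch (the function name p_at_least is an illustrative choice, not taken from the correspondence or the cited papers) computes the probability of at least n sixes with 6n fair dice via pbinom, the cumulative binomial distribution function:

p_at_least <- function(n) {
  # P(at least n sixes in 6n throws of a fair die)
  1 - pbinom(n - 1, size = 6 * n, prob = 1 / 6)
}
round(sapply(1:3, p_at_least), 4)
# outcomes A, B, C: 0.6651 0.6187 0.5973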
https://en.wikipedia.org/wiki/Newton–Pepys_problem
NexTView was an electronic program guide for the analog domain, introduced in 1995 and based on Teletext Level 2.5 / Hi-Text . [ 1 ] [ 2 ] It carried TV programme listings for all of the major networks in Germany, Austria, France and Switzerland. The transmission protocol was based on teletext , [ 3 ] [ 4 ] but used a compact binary format instead of preformatted text pages. The advantage compared to paper-based TV magazines was that the user had an immediate overview of the current and next programmes, and was able to search through the programme database, filtering results by categories. The nxtvepg software [ 5 ] enabled NexTView to be viewed using a personal computer . Some TV manufacturers that implemented this solution were: Grundig , Loewe , Metz , Philips , Sony , Thomson , and Quelle Universum . From 1997 to October 2013, NexTView was broadcast on Swiss Television channels and on French-language channels whose teletext services were managed from Swiss Television (SwissText) (TV5, M6, Canal+). [ 6 ] This article about television technology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/NexTView
NextBus is a public transit vehicle tracking system which uses global positioning satellite information to predict when the next vehicle will arrive at any given transit stop, which attempts to reduce wait times and reliance on schedules. NextBus is developed by Nextbus Information Systems, Inc., a subsidiary of Cubic Transportation Systems , for buses , trams , light rail operations and other public transport vehicles. The company was founded by Ken Schmier, Bryce Nesbitt and Paul Freda in 1997 in Emeryville, California with U.S. Patents 6,006,159 & 6,374,176. As of 2013, the company systems were operating in over 130 locations. NextBus Information Systems, Inc. was previously a subsidiary of Webtech Wireless, Inc. and was acquired by Cubic Transportation Systems in January 2013. [ 1 ] Each vehicle is fitted with a Global Positioning System (GPS) receiver, which transmits speed and location data to a central location. There, a computer running proprietary software calculates the projected arrival times for all stops in the system using this data, along with configuration information and historic travel times. These times are then converted to a 'wait time' and made available via the NextBus website and electronic signs at bus stops and tram stops as well as cell phones, and other wireless devices via the Internet . NextBus provides a real-time passenger information system for all routes for several major transportation agencies including the San Francisco Municipal Railway , Los Angeles County Metropolitan Transportation Authority , the Massachusetts Bay Transportation Authority , and the Toronto Transit Commission . The system is also available in many other smaller universities and public transportation agencies for a total of approximately 100 systems. This website-related article is a stub . You can help Wikipedia by expanding it . This article about transport is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/NextBus
NextGenPower is an integrated project which aims to demonstrate new alloys and coatings in boiler, turbine and interconnecting pipework. The concept of NextGenPower is to perform innovative demonstrations that will significantly contribute to the EU target to increase the efficiency in existing and new build pulverized coal power plants. Carbon Capture and Storage (CCS) is envisaged to be the main transition technology to comply with the CO 2 reduction targets set by the European Commission . [ 1 ] However, CCS has the drawback that the electrical efficiency of the coal-fired power plant will drop significantly. The efficiency loss caused by CCS in coal-fired power plants will range from 4 to 12% points, depending on the CCS technology chosen. To overcome this drawback, one has to increase the plant efficiency or the share of biomass co-firing. Both options are limited due to the quality of the current available coatings and materials. Live steam temperatures well in excess of 700 °C are necessary to compensate the efficiency loss caused by CCS and to achieve a net efficiency of 45%. NextGenPower aims to develop and demonstrate coatings and materials that can be applied in ultra-supercritical (in excess of 700˚C) conditions. The NextGenPower project was due to start on 1 May 2010 and have a duration of 48 months. The budget is €10.3million, with an EU contribution making up €6million of the budget. [ 2 ] The following scientific and technological objectives have been defined for NextGenPower, leading to the following project activities: There are also four sub-projects which will be focused on throughout the course of the NextGenPower project. NextGenPower aims at overcoming fireside corrosion and steamside oxidation in high temperature parts through the application of suitable coatings. The main goal for Sub Project 1 is to demonstrate the benefits and limitations of materials and coatings for the fireside under biomass co-firing conditions as well as for the boiler and main steam pipework under USC and current steam conditions. The main goals for Sub Project 2 are to select the best candidate alloys for the HP and ID steam turbines operating at high steam temperatures (≥720˚C). A number of nickel-base alloys have been developed whose properties have been proven at the laboratory scale and for small-scale components. The main uncertainty in the application of these alloys for steam turbine applications is the ability to manufacture, weld and inspect large components. The performance in service presents a much smaller risk since there is confidence that the mechanical behaviour can be modelled on the basis of the material properties. This philosophy follows the approach applied in the development, demonstration and exploitation of materials technology for 700-720˚C steam turbines in other projects ( AD700 , COMTES , EON 50plus) where the first commercial steam turbine will enter service without prior operation in a test loop. Following alloy selection, full-scale steam turbine casings and rotor forgings will be manufactured and materials properties demonstrated through implementation of a mechanical testing programme. Full-scale demonstration of the welding technology and the NDE capability required for welded rotor and casing manufacture will also be carried out. Sub Project 3 provides a framework for the testing and demonstration work in the overall project. 
It will review the expected operating parameters required for NGP plants, with and without CO 2 capture technologies, and with and without biomass co-firing. The aim is to evaluate a series of NextGenPower plants with CCS systems in terms of their power generation efficiencies and CO 2 emissions per unit of electricity generated. The main goal for Sub Project 4 is to ensure that the generic results and results from topical activities are actively disseminated. It promotes the project's results and approaches and encourages their duplication elsewhere, thereby contributing to the EU objectives of CO 2 reduction, efficiency improvement and security of energy supply. Another objective is to facilitate the sharing of policies, approaches and knowledge between the participants.
https://en.wikipedia.org/wiki/NextGenPower
NextNav, Inc. is the developer of a 3D geolocation service known as Metropolitan Beacon System (MBS), a wide-area location and timing technology designed to provide services in areas where GPS or other satellite location signals cannot be reliably received. MBS consumes significantly less power than GPS and includes high-precision altitude. [ citation needed ] In the United States, NextNav operates its MBS network over its spectrum licenses in the 920-928 MHz band. [ 3 ] [ 4 ] [ 5 ] [ 6 ] The company went public on Nasdaq in October 2021 with a merger with special-purpose acquisition company Spartacus Acquisition Corporation. [ 7 ] NextNav distributed its Pinnacle vertical location service in January 2021, which provides floor-level vertical location using barometric sensors from cell phones and other devices. [ citation needed ] Their Pinnacle network was distributed in partnership with AT&T and is in more than 4,400 cities across the United States. [ citation needed ] The larger NextNav network uses Metropolitan Beacon System technology to deliver high-precision three-dimensional indoor location capabilities across a market area. MBS is built on principles similar to GPS transmitting precisely timed signals from a network of wide-area beacons enabling receivers to use trilateration techniques to determine their precise locations. Due to the terrestrial placement of the transmitters and the sub-GHz nature of the signal, MBS signals can travel several kilometers and—because the network is specifically designed, deployed, and managed for indoor positioning—can be reliably received in deep indoor conditions that block satellite signals (e.g., GPS, GLONASS ). [ citation needed ] MBS signals also enable location to be computed with far lower power drain than GPS . [ citation needed ] In addition, the system incorporates barometric pressure compensation technology that allows receivers equipped with pressure sensors to compute their altitude very precisely, typically within a floor. A byproduct of the GPS-like operating principles of NextNav's MBS network is the ability to deliver high-precision (Stratum-1-level) timing to indoor locations or in the event of GPS outages. MBS receivers are being commercialized as an additional constellation added to multi-constellation GNSS processors. [ citation needed ] Today's GPS processors typically process additional satellite constellations, and the MBS processing capability constitutes primarily firmware additions. [ citation needed ] The performance of the technology under emergency dialing conditions was originally demonstrated in the CSRIC III test bed in San Francisco in 2012, with performance enhancements added on an ongoing basis. [ citation needed ] More recently the technology was enabled in the primary global telecommunication standards bodies, 3GPP (Release 13) [ 8 ] and OMA (SUPL 2.0.3). [ 9 ] MBS signal technology is available under FRAND terms. The technology can be scaled for any location application, including services to mobile phones, the Internet of Things, and enterprise and public safety applications. [ citation needed ] On March 11, 2024, NextNav announced it signed an agreement to acquire spectrum licenses covering an additional 4 MHz in the lower 900 MHz band (902-928) from Telesaurus Holdings GB LLC, and Skybridge Spectrum Foundation. NextNav acquired the additional spectrum licenses for a total purchase price of up to $50 million, paid for through a combination of cash and NextNav common stock. 
The acquired licenses are in the same lower 900 MHz band as NextNav's current licensed spectrum. On April 16, 2024, NextNav filed a rulemaking petition with the Federal Communications Commission to deliver a spectrum solution in the Lower 900 MHz band on the grounds that it would facilitate a terrestrial positioning, navigation, and timing network (as a complement and backup to GPS) and broadband. As of September 5th, 2024 the comment period had closed and was strongly opposed by American Radio Relay League as well as the LoRa -based Meshtastic community which also operate in the 902-928Mhz band. [ 10 ] [ 11 ] NextNav's Urban and Indoor Positioning service TerraPoiNT is available in San Francisco Bay Area, McLean, VA and other select markets. [ citation needed ] Its Pinnacle vertical location service is available in more than 4,400 cities nationwide and has partnered with AT&T FirstNet to provide vertical location service for First Responders . [ citation needed ] In November 2022, NextNav has recently completed the acquisition of a geolocation system provider based in France, specializing in low-power technologies, Nestwave. [ 12 ]
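As a generic aside on the trilateration principle mentioned above (this is not NextNav's algorithm, and the beacon coordinates and ranges below are invented for illustration), a 2-D position can be estimated from known beacon positions and measured ranges by linearizing the range equations against the first beacon and solving the resulting linear system, here sketched in R:

trilaterate <- function(bx, by, r) {
  # Subtracting the first range equation from the others removes the quadratic terms.
  A <- cbind(2 * (bx[-1] - bx[1]), 2 * (by[-1] - by[1]))
  b <- (bx[-1]^2 - bx[1]^2) + (by[-1]^2 - by[1]^2) + (r[1]^2 - r[-1]^2)
  as.vector(qr.solve(A, b))   # least-squares estimate of (x, y)
}
# Three beacons and ranges consistent with a receiver near (300, 400)
trilaterate(bx = c(0, 1000, 0), by = c(0, 0, 1200), r = c(500, 806.2, 854.4))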
https://en.wikipedia.org/wiki/NextNav
The Next Generation Mobile Networks (NGMN) Alliance is a mobile telecommunications association of mobile operators, vendors, manufacturers and research institutes. It was founded by major mobile operators in 2006 as an open forum to evaluate candidate technologies to develop a common view of solutions for the next evolution of wireless networks. Its objective is to ensure the successful commercial launch of future mobile broadband networks through a roadmap for technology and friendly user trials. Its office is in Frankfurt , Germany . [ 1 ] The NGMN Alliance complements and supports standards organizations by providing a coherent view of what mobile operators require. The alliance's project results have been acknowledged by groups such as the 3rd Generation Partnership Project (3GPP), TeleManagement Forum (TM Forum) and the Institute of Electrical and Electronics Engineers (IEEE). [ 2 ] [ 3 ] The Initial phase of the NGMN Alliance involved working groups on technology, spectrum, intellectual property rights (IPR), ecosystem, and trials, to enable the launch of commercial next generation mobile services in 2010. In a white paper first released in March 2006, NGMN summarized a vision for mobile broadband communications and included recommendations as well as requirements. It provided operators´ relative priorities of key system characteristics, system recommendations and detailed requirements for the standards for the next generation of mobile broadband networks, devices and services. [ 4 ] From July 2007 to February 2008, standards and technologies were evaluated for next generation mobile networks. These were 3GPP Long Term Evolution (LTE) and its System Architecture Evolution (SAE), IEEE 802.16e (products known as WiMax ), 802.20 , and Ultra Mobile Broadband . In June 2008, the NGMN Alliance announced that, “based on a thorough technology evaluation, the NGMN board concluded that LTE/SAE is the first technology which broadly meets its requirements as defined in the NGMN white paper. The NGMN Alliance therefore approves LTE/SAE as its first compliant technology”. [ 5 ] [ 6 ] [ 7 ] Also in June 2008 the alliance announced it would work with the Femto Forum to ensure femtocells benefit from the technology. [ 8 ] [ 9 ] The alliance worked on intellectual property rights "to adapt the existing IPR regime to provide a better predictability of the IPR licenses (...) to ensure Fair, Reasonable And Non-Discriminatory ( FRAND ) IPR costs". [ 10 ] As part of this work, it issued a public request for information on LTE patent pool administration. [ 11 ] [ 12 ] The alliance provided input to the International Telecommunication Union (ITU) World Radiocommunication Conference (WRC) on frequency allocation, since they considered a timely and globally aligned spectrum allocation policy a key to the development of a viable ecosystem on a national, regional and global scale. The ITU and regional bodies are developing channeling arrangements for the frequency bands identified at the ITU World Radio Conference in 2007. In October 2009, the NGMN spectrum working group released “Next Generation Mobile Networks Spectrum Requirements Update”, containing the status and NGMN views and requirements on frequency bands identified at the ITU WRC-07. 
[ 13 ] Since next generation devices, networks and services need to be synchronized for a successful launch, NGMN in February 2009 released a white paper which provided generic definitions for next generation (data only) devices to ensure that devices were available at the time when first networks were launched in 2010. [ 14 ] After the launch of the first LTE networks in 2010, [ 15 ] the alliance then addressed challenges of network deployment, operations, and interworking, while focusing on LTE and its evolved packet core, as defined by the System Architecture Evolution. In September 2010, NGMN published recommendations on operational aspects of next generation networks. Rising complexity and increasing cost of network operations due to heterogeneity of networks (supporting different technologies), number of network elements, the market need to gain flexibility in service management and to improve service quality drive the need to improve the overall network operations. The document outlines requirements for self-organizing network functionalities and operations and maintenance (O&M) to address these issues. [ 16 ] [ 17 ] In 2014, the NGMN Board decided to focus future NGMN activities on defining the end-to-end requirements for 5G. [ 18 ] [ circular reference ] A global team has developed the NGMN 5G White Paper [ 19 ] (published March 2015) delivering consolidated operator requirements to support the standardisation and development of 5G. NGMN encouraged the industry to have 5G solutions available by 2020, which was exceeded by some operators that already launched 5G trials in 2019. The commercial introduction of 5G naturally varied from operator to operator. In 2015, NGMN launched a 5G-focused work-programme that built on and further evolved the White Paper guidelines. The main 5G NGMN work-items for 2015 were: the development of technical 5G requirements and architectural design principles, the analysis of potential 5G solutions and the assessment of future use-cases and business models. [ 20 ] Furthermore, the NGMN project teams addressed the areas IPR and Spectrum from a 5G perspective. In September 2015, the NGMN published a Q&A about 5G: [ 21 ] In 2020, NGMN published an updated White Paper on 5G, requesting a common platform architecture to allow edge computing to be used on a global scale. Furthermore the NGMN Alliance positioned itself to provide a fully integrated solution for Verticals that encompasses networks, clouds and platforms, with dynamic customisation, partnerships, end-to-end management, carrier-grade security and efficient spectrum use. At the same time the organisation highlighted that an increased focus needs to be given to further improving energy efficiency, sustainability, social wellbeing, trust, and digital inclusion. [ 22 ] [ 23 ] In October 2020, the NGMN Alliance launched a project on 6G [ 24 ] as well as a project on sustainability [ 25 ] A new strategy was announced by the NGMN Alliance in February 2021, focusing on disaggregation, sustainability and 6G while still supporting the implementation of 5G’s full potential. 
[ 26 ] As a first outcome of that strategy, a project was launched to analyse the impact of disaggregation and cloudification on the operating model of mobile network operators and - in a pre-competitive environment - to develop operating model blueprints for enabling a successful E2E operation of disaggregated networks. The NGMN Alliance is organized as an association of more than 80 partners from the mobile telecommunications industry and research. About one third are mobile operators, representing well over one half of the total mobile subscriber base world-wide. The remainder comprises vendors and manufacturers accounting for more than 90% of the global footprint of mobile network development as well as universities or non-industrial research institutes. [ 3 ] [ 27 ] The NGMN Alliance co-operates with standards bodies and industry organisations like 3GPP, [ 28 ] the European Telecommunications Standards Institute , the GSM Association , and the TM Forum. [ 3 ] In July 2010, the alliance and the TM Forum agreed to work together on optimized management systems and operations of the next generation of mobile networks. [ 29 ] In May 2011 the alliance became a market representation partner to 3GPP. In December 2014 ETSI and NGMN signed a cooperation agreement to intensify the dialogue and exchange of information between the two organizations. [ 30 ] In June 2019, the EMEA Satellite Operators Association (ESOA) and the NGMN Alliance agreed to cooperate in the area of integration of satellite solutions in the 5G ecosystem. [ 31 ] In October 2020, the NGMN Alliance and the O-RAN Alliance signed a cooperation agreement, cooperating in the area of Radio Access Network decomposition of 4G and 5G networks. [ 32 ] In May 2021, the Linux Foundation and the NGMN Alliance signed a Memorandum of Understanding (MoU) for formal collaboration regarding end-to-end 5G and beyond. [ 33 ]
https://en.wikipedia.org/wiki/Next_Generation_Mobile_Networks
The Nexus Tools Platform or NTP is a web-based inventory platform that allows an interactive comparison of environmental models in a statistical way. Developed by the UNU Institute for Integrated Management of Material Fluxes and of Resources (UNU-FLORES) , [ 1 ] the platform helps a user to analyze existing modelling tools related to environmental resource management and associated nexus perspectives, such as a Water-Energy-Food Nexus . As a result, the user can select the most appropriate tools to fit the research needs. Considering an increase in the global population , the demand for both food and water is expected to grow by more than 50% by 2050. [ 2 ] [ 3 ] At the same time, agricultural activity already puts significant stress on the quantity and quality of soils and freshwater resources worldwide. This leads to growing rates of land degradation [ 4 ] and ecosystem deterioration. [ 5 ] Long-term solutions to those challenges require integrated management of environmental resources which can be also referred to as a Nexus Approach . This approach focuses on the synergies between sectors in order to minimize trade-offs , increase resource use efficiency , and improve environmental policy & governance which is crucial for the achievement of the Sustainable Development Goals . For example, the sustainable management of wastewater in crop production is considered as an implementation of a Water-Soil-Waste Nexus . [ 6 ] Integrated resource management requires the use of integrated environmental modelling tools. However, there is no single model that can cover all complex interactions between natural resources such as water , soil , and waste . [ 1 ] Moreover, there is a poor awareness of the available models and their capabilities which results in a time-consuming process of developing new models instead of using or further expanding already existing ones. [ 7 ] With the objective of addressing the abovementioned problems, the Nexus Tools Platform has been created. It provides detailed visualizations of model descriptions and advanced filtering options using Kibana open-source visualization software. The platform does not attempt to include all possible models, but rather to provide a more efficient way to compare well-established ones and to keep the content updated through the feedback system with model users and developers . The first version of the website was published in April 2015 starting with the database of 60 models from around the world. [ 8 ] [ 9 ] A year later, a paper was published in Environmental Modelling & Software journal explaining the NTP development, data collection methodology, and application examples. [ 1 ] In March 2018, NTP 2.0 was released with a greater number of available models (84), improved classification and visualization. [ 10 ] Through the advanced search and filter functions, users could differentiate modelling tools in terms of model types, popularity, considered processes, input and output parameters, application areas (countries), programming language, etc. Recently, the whole database was refreshed and one more model was added making 85 in total. The data for the platform is collected in two ways: The current list of models includes a wide array of research areas from hydrology and land management to soil and water quality topics from a local to global scale. Some of the most popular examples are SWAT , MODFLOW , HEC-RAS , and CESM . The NTP is not intended to promote any of the models listed on the website. 
The information, whether in graphical or text form, is presented for scientific and educational purposes only.
https://en.wikipedia.org/wiki/Nexus_Tools_Platform
A Nexus driver is a bus device driver that interfaces leaf drivers to a specific I/O bus and provides the low-level integration of this I/O bus. In some systems, for example Solaris , drivers are organized into a tree structure . A driver that provides services to other drivers below it in the tree is called, in Solaris terminology, a nexus driver. A tree node with no children, a leaf node , is called a leaf driver . [ 1 ] [ 2 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Nexus_driver
The extensible NEXUS file format is widely used in phylogenetics , evolutionary biology, and bioinformatics . It stores information about taxa , morphological character states, DNA and protein sequence alignments, distances, and phylogenetic trees. [ 1 ] The NEXUS format also allows the storage of data that can facilitate analyses, such as sets of characters or taxa. Many popular phylogenetic programs, including PAUP* , [ 2 ] MrBayes , [ 3 ] Mesquite, [ 4 ] MacClade, [ 5 ] and SplitsTree , [ 6 ] use this format. Nexus file names typically have the extension .nxs or .nex . A NEXUS file is made out of a fixed header #NEXUS followed by multiple blocks. Each block starts with BEGIN block_name; and ends with END; . The keywords are case-insensitive. Comments are enclosed inside square brackets [...] . [ 7 ] Each of the pre-defined types of blocks may appear only once. The following example NEXUS uses the TAXA, CHARACTERS, and TREES blocks:
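The example file referred to above is not reproduced here; the block below is a minimal, hypothetical NEXUS file (the taxon labels, sequences and tree are invented for illustration) showing the three blocks in question. Note that the bracketed text is a NEXUS comment.

#NEXUS
[ a minimal illustrative file ]
BEGIN TAXA;
      DIMENSIONS NTAX=4;
      TAXLABELS A B C D;
END;
BEGIN CHARACTERS;
      DIMENSIONS NCHAR=5;
      FORMAT DATATYPE=DNA MISSING=? GAP=-;
      MATRIX
            A ACGTA
            B ACGTT
            C ACCTT
            D ACCTC
      ;
END;
BEGIN TREES;
      TREE tree1 = ((A,B),(C,D));
END;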
https://en.wikipedia.org/wiki/Nexus_file
The Nexus for Exoplanet System Science ( NExSS ) initiative is a National Aeronautics and Space Administration ( NASA ) virtual institute designed to foster interdisciplinary collaboration in the search for life on exoplanets . Led by the Ames Research Center , the NASA Exoplanet Science Institute , and the Goddard Institute for Space Studies , NExSS will help organize the search for life on exoplanets from participating research teams and acquire new knowledge about exoplanets and extrasolar planetary systems. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] In 1995, astronomers using ground-based observatories discovered 51 Pegasi b , the first exoplanet orbiting a Sun-like star . [ 7 ] NASA launched the Kepler space telescope in 2009 to search for Earth-size exoplanets. By 2015, they had confirmed more than a thousand exoplanets, [ note 1 ] while several thousand additional candidates awaited confirmation. [ 9 ] To help coordinate efforts to sift through and understand the data, NASA needed a way for researchers to collaborate across disciplines. The success of the Virtual Planetary Laboratory research network at the University of Washington led Mary A. Voytek , director of the NASA Astrobiology Program , to model its structure and create the Nexus for Exoplanet System Science (NExSS) initiative. [ 1 ] [ 10 ] Leaders from three NASA research centers will run the program: Natalie Batalha of NASA's Ames Research Center, Dawn Gelino of the NASA Exoplanet Science Institute, and Anthony Del Genio of NASA's Goddard Institute for Space Studies. [ 11 ] Functioning as a virtual institute, NExSS is currently composed of sixteen interdisciplinary science teams from ten universities, three NASA centers and two research institutes, who will work together to search for habitable exoplanets that can support life. [ 12 ] The US teams were initially selected from a total of about 200 proposals; however, the coalition is expected to expand nationally and internationally as the project gets underway. [ 13 ] Teams will also work with amateur citizen scientists who will have the ability to access the public Kepler data and search for exoplanets. [ 11 ] NExSS will draw from scientific expertise in each of the four divisions of the Science Mission Directorate : Earth science , planetary science , heliophysics and astrophysics . [ 2 ] NExSS research will directly contribute to understanding and interpreting future exoplanet data from the upcoming launches of the Transiting Exoplanet Survey Satellite and James Webb Space Telescope , as well as the planned Nancy Grace Roman Space Telescope mission. [ 2 ] Current NExSS research projects as of 2015: [ 2 ]
https://en.wikipedia.org/wiki/Nexus_for_Exoplanet_System_Science
The Neyer d-optimal method or Neyer d-optimal test is a sensitivity test method. [ 1 ] It can be used to answer questions such as "How far can a carton of eggs fall, on average, before one breaks?" If these egg cartons are very expensive, the person running the test would like to minimize the number of cartons dropped, to keep the experiment cheaper and to perform it faster. The Neyer test allows the experimenter to choose the experiment that gives the most information. In this case, given the history of egg cartons which have already been dropped, and whether those cartons broke or not, the Neyer test says "you will learn the most if you drop the next egg carton from a height of 32.123 meters." The Neyer test is useful in any situation where one wishes to determine the average amount of a given stimulus needed in order to trigger a response. Examples: The Neyer d-optimal test was described by Barry T. Neyer in 1994. This method has replaced the earlier Bruceton analysis or "Up and Down Test" that was devised by Dixon and Mood in 1948 to allow computation with pencil and paper. Samples are tested at various stimulus levels, and the results (response or no response) noted. The Neyer test guides the experimenter to pick test levels that provide the maximum amount of information. Unlike previous methods that have been developed, this method requires the use of a computer program to calculate the test levels. Although not directly related to the test method, the likelihood ratio analysis method is often used to analyze the results of tests conducted with the Neyer d-optimal test. The combined test and analysis methods are commonly known as the Neyer test. Dror and Steinberg (2008) suggest another experimental design method which is more efficient than Neyer's, by enabling the usage of a D-optimal design criterion from the outset of the experiment. Furthermore, their method is extended to deal with situations which are not handled by previous algorithms, including extension from fully sequential designs (updating the plan after each observation) to group-sequential designs (any partition of the experiment into blocks of numerous observations), from a binary response ("success" or "failure") to any generalized linear model, and from the univariate case to the treatment of multiple predictors (such as designing an experiment to test a response in a medical treatment where the experimenter changes doses of two different drugs).
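Neyer's exact algorithm is implemented in dedicated software and is not reproduced in this article, but the underlying idea of choosing the next test level to maximize information can be sketched generically. The R sketch below is illustrative only: it fits a logistic dose-response model to the test history (Neyer's method uses a normal threshold model and additional start-up rules) and picks, from a grid of candidate levels, the one that maximizes the determinant of the Fisher information matrix, i.e. the D-optimality criterion. The names and example data are made up.

next_level <- function(levels, responses, candidates) {
  fit <- glm(responses ~ levels, family = binomial)      # assumed logistic model
  info_at <- function(x) {                               # Fisher information of one trial at level x
    p <- plogis(coef(fit)[1] + coef(fit)[2] * x)
    p * (1 - p) * matrix(c(1, x, x, x^2), 2, 2)
  }
  current <- Reduce(`+`, lapply(levels, info_at))        # information accumulated so far
  gains <- sapply(candidates, function(x) det(current + info_at(x)))
  candidates[which.max(gains)]                           # most informative next test level
}

# Example: drop heights (m) tested so far and whether the egg carton broke (1 = yes)
heights <- c(10, 20, 30, 40)
broke   <- c(0, 1, 0, 1)
next_level(heights, broke, candidates = seq(5, 50, by = 0.5))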
https://en.wikipedia.org/wiki/Neyer_d-optimal_test
In statistics , the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. [ 1 ] The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind , power function , and inductive behavior. [ 2 ] [ 3 ] [ 4 ] The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors . The trivial cases where one always rejects or accepts the null hypothesis are of little interest, but they do prove that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all level α tests while subsequently minimizing the type II error, traditionally denoted by β. Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error (α), but also providing a way to construct such tests. The Karlin–Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.

Consider a test with hypotheses H0: θ = θ0 and H1: θ = θ1, where the probability density function (or probability mass function ) is ρ(x ∣ θi) for i = 0, 1. For any hypothesis test with rejection set R, and any α ∈ [0, 1], we say that it satisfies condition P_α if there is some η ≥ 0 and some ignorable (probability-zero) set A such that the test has size α, that is, Pr_θ0(X ∈ R) = α, and ρ(x ∣ θ1) > η ρ(x ∣ θ0) for all x ∈ R ∖ A, while ρ(x ∣ θ1) < η ρ(x ∣ θ0) for all x ∈ R^c ∖ A. For any α ∈ [0, 1], let the set of level α tests be the set of all hypothesis tests with size at most α. That is, letting its rejection set be R, we have Pr_θ0(X ∈ R) ≤ α.

Neyman–Pearson lemma [ 5 ] : Existence: If a hypothesis test satisfies the P_α condition, then it is a uniformly most powerful (UMP) test in the set of level α tests. Uniqueness: If there exists a hypothesis test R_NP that satisfies the P_α condition with η > 0, then every UMP test R in the set of level α tests satisfies the P_α condition with the same η. Further, the R_NP test and the R test agree with probability 1 whether θ = θ0 or θ = θ1.

In practice, the likelihood ratio is often used directly to construct tests (see likelihood-ratio test ). However it can also be used to suggest particular test statistics that might be of interest or to suggest simplified tests; for this, one considers algebraic manipulation of the ratio to see if there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).

Given any hypothesis test with rejection set R, define its statistical power function β_R(θ) = Pr_θ(X ∈ R). Existence: Given some hypothesis test that satisfies the P_α condition, call its rejection region R_NP (where NP stands for Neyman–Pearson). For any level α hypothesis test with rejection region R we have [1_{R_NP}(x) − 1_R(x)][ρ(x ∣ θ1) − η ρ(x ∣ θ0)] ≥ 0 except on some ignorable set A. Integrating this over x gives 0 ≤ [β_{R_NP}(θ1) − β_R(θ1)] − η[β_{R_NP}(θ0) − β_R(θ0)]. Since β_{R_NP}(θ0) = α and β_R(θ0) ≤ α, we find that β_{R_NP}(θ1) ≥ β_R(θ1). Thus the R_NP rejection test is a UMP test in the set of level α tests. Uniqueness: For any other UMP level α test, with rejection region R, we have from the Existence part [β_{R_NP}(θ1) − β_R(θ1)] ≥ η[β_{R_NP}(θ0) − β_R(θ0)]. Since the R test is UMP, the left side must be zero. Since η > 0, the right side gives β_R(θ0) = β_{R_NP}(θ0) = α, so the R test has size α. Since the integrand [1_{R_NP}(x) − 1_R(x)][ρ(x ∣ θ1) − η ρ(x ∣ θ0)] is nonnegative and integrates to zero, it must be exactly zero except on some ignorable set A. Since the R_NP test satisfies the P_α condition, let the ignorable set in the definition of the P_α condition be A_NP. The set R ∖ (R_NP ∪ A_NP) is ignorable, since for all x ∈ R ∖ (R_NP ∪ A_NP) we have [1_{R_NP}(x) − 1_R(x)][ρ(x ∣ θ1) − η ρ(x ∣ θ0)] = η ρ(x ∣ θ0) − ρ(x ∣ θ1) > 0. Similarly, R_NP ∖ (R ∪ A_NP) is ignorable. Define A_R := (R Δ R_NP) ∪ A_NP (where Δ means symmetric difference ). It is the union of three ignorable sets, thus it is an ignorable set. Then we have: x ∈ R ∖ A_R implies ρ(x ∣ θ1) > η ρ(x ∣ θ0), and x ∈ R^c ∖ A_R implies ρ(x ∣ θ1) < η ρ(x ∣ θ0). So the R rejection test satisfies the P_α condition with the same η. Since A_R is ignorable, its subset R Δ R_NP ⊂ A_R is also ignorable. Consequently, the two tests agree with probability 1 whether θ = θ0 or θ = θ1.

Let X_1, …, X_n be a random sample from the N(μ, σ²) distribution where the mean μ is known, and suppose that we wish to test H0: σ² = σ0² against H1: σ² = σ1². The likelihood for this set of normally distributed data is L(σ²; x) = (2πσ²)^(−n/2) exp(−∑_{i=1}^n (x_i − μ)² / (2σ²)). We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome: Λ(x) = L(σ0²; x) / L(σ1²; x) = (σ1²/σ0²)^(n/2) exp(−(1/2)(σ0^(−2) − σ1^(−2)) ∑_{i=1}^n (x_i − μ)²). This ratio only depends on the data through ∑_{i=1}^n (x_i − μ)². Therefore, by the Neyman–Pearson lemma, the most powerful test of this type of hypothesis for this data will depend only on ∑_{i=1}^n (x_i − μ)². Also, by inspection, we can see that if σ1² > σ0², then Λ(x) is a decreasing function of ∑_{i=1}^n (x_i − μ)². So we should reject H0 if ∑_{i=1}^n (x_i − μ)² is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic can be shown to be a scaled chi-square distributed random variable and an exact critical value can be obtained.

A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land-estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to calculate the best land parcel that they can buy, i.e. the land parcel with the largest utility, whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used. [ 6 ]

The Neyman–Pearson lemma is quite useful in electronics engineering , namely in the design and use of radar systems, digital communication systems , and in signal processing systems. In radar systems, the Neyman–Pearson lemma is used in first setting the rate of missed detections to a desired (low) level, and then minimizing the rate of false alarms , or vice versa. Neither false alarms nor missed detections can be set at arbitrarily low rates, including zero. All of the above goes also for many systems in signal processing.
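Referring back to the normal-variance example above, the resulting most powerful test is short to state in code. The R sketch below is illustrative (the function and variable names are made up): with μ known, the statistic ∑(x_i − μ)²/σ0² has a chi-square distribution with n degrees of freedom under H0, which yields the exact critical value mentioned above.

mp_variance_test <- function(x, mu, sigma0_sq, alpha = 0.05) {
  # Most powerful size-alpha test of H0: sigma^2 = sigma0^2 against a larger alternative,
  # with the mean mu known; reject when the key statistic exceeds the chi-square cutoff.
  n    <- length(x)
  stat <- sum((x - mu)^2)                         # the key statistic singled out by the lemma
  crit <- sigma0_sq * qchisq(1 - alpha, df = n)   # exact critical value under H0
  list(statistic = stat, critical = crit, reject = stat > crit)
}

# Example: data simulated under the alternative (true sigma = 2), so the test should reject
set.seed(1)
x <- rnorm(20, mean = 0, sd = 2)
mp_variance_test(x, mu = 0, sigma0_sq = 1)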
The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood-ratios, used to e.g. test for signatures of new physics against the nominal Standard Model prediction in proton–proton collision datasets collected at the LHC . [ 7 ] Neyman wrote about the discovery of the lemma as follows. [ 8 ] Paragraph breaks have been inserted. I can point to the particular moment when I understood how to formulate the undogmatic problem of the most powerful test of a simple statistical hypothesis against a fixed simple alternative. At the present time [probably 1968], the problem appears entirely trivial and within easy reach of a beginning undergraduate. But, with a degree of embarrassment, I must confess that it took something like half a decade of combined effort of E. S. P. [Egon Pearson] and myself to put things straight. The solution of the particular question mentioned came on an evening when I was sitting alone in my room at the Statistical Laboratory of the School of Agriculture in Warsaw, thinking hard on something that should have been obvious long before. The building was locked up and, at about 8 p.m., I heard voices outside calling me. This was my wife, with some friends, telling me that it was time to go to a movie. My first reaction was that of annoyance. And then, as I got up from my desk to answer the call, I suddenly understood: for any given critical region and for any given alternative hypothesis, it is possible to calculate the probability of the error of the second kind; it is represented by this particular integral. Once this is done, the optimal critical region would be the one which minimizes this same integral, subject to the side condition concerned with the probability of the error of the first kind. We are faced with a particular problem of the calculus of variation, probably a simple problem. These thoughts came in a flash, before I reached the window to signal to my wife. The incident is clear in my memory, but I have no recollections about the movie we saw. It may have been Buster Keaton.
https://en.wikipedia.org/wiki/Neyman–Pearson_lemma
Nickel carbonyl ( IUPAC name: tetracarbonylnickel ) is a nickel(0) organometallic compound with the formula Ni(CO) 4 . This colorless liquid is the principal carbonyl of nickel . It is an intermediate in the Mond process for producing very high-purity nickel and a reagent in organometallic chemistry , although the Mond Process has fallen out of common usage due to the health hazards in working with the compound. Nickel carbonyl is one of the most dangerous substances yet encountered in nickel chemistry due to its very high toxicity , compounded with high volatility and rapid skin absorption. [ 4 ] In nickel tetracarbonyl, the oxidation state for nickel is assigned as zero, because the Ni−C bonding electrons come from the C atom and are still assigned to C in the hypothetical ionic bond which determines the oxidation states. The formula conforms to the 18-electron rule . The molecule is tetrahedral , with four carbonyl ( carbon monoxide ) ligands . Electron diffraction studies have been performed on this molecule, and the Ni−C and C−O distances have been calculated to be 1.838(2) and 1.141(2) angstroms respectively. [ 5 ] Ni(CO) 4 was first synthesised in 1890 by Ludwig Mond by the direct reaction of nickel metal with carbon monoxide. [ 6 ] This pioneering work foreshadowed the existence of many other metal carbonyl compounds, including those of vanadium , chromium , manganese , iron , and cobalt . It was also applied industrially to the purification of nickel by the end of the 19th century. [ 7 ] At 323 K (50 °C; 122 °F), carbon monoxide is passed over impure nickel. The optimal rate occurs at 130 °C. [ 8 ] Ni(CO) 4 is not readily available commercially. It is conveniently generated in the laboratory by carbonylation of commercially available bis(cyclooctadiene)nickel(0) . [ 9 ] It can also be prepared by reduction of ammoniacal solutions of nickel sulfate with sodium dithionite under an atmosphere of CO. [ 10 ] On moderate heating, Ni(CO) 4 decomposes to carbon monoxide and nickel metal. Combined with the easy formation from CO and even very impure nickel, this decomposition is the basis for the Mond process for the purification of nickel or plating onto surfaces. Thermal decomposition commences near 180 °C (356 °F) and increases at higher temperature. [ 8 ] Like other low-valent metal carbonyls, Ni(CO) 4 is susceptible to attack by nucleophiles. Attack can occur at nickel center, resulting in displacement of CO ligands, or at CO. Thus, donor ligands such as triphenylphosphine react to give Ni(CO) 3 (PPh 3 ) and Ni(CO) 2 (PPh 3 ) 2 . Bipyridine and related ligands behave similarly. [ 11 ] The monosubstitution of nickel tetracarbonyl with other ligands can be used to determine the Tolman electronic parameter , a measure of the electron donating or withdrawing ability of a given ligand. Treatment with hydroxides gives clusters such as [Ni 5 (CO) 12 ] 2− and [Ni 6 (CO) 12 ] 2− . These compounds can also be obtained by reduction of nickel carbonyl. Thus, treatment of Ni(CO) 4 with carbon nucleophiles (Nu − ) results in acyl derivatives such as [Ni(CO) 3 C(O)Nu)] − . [ 12 ] Nickel carbonyl can be oxidized . Chlorine oxidizes nickel carbonyl into NiCl 2 , releasing CO gas. Other halogens behave analogously. This reaction provides a convenient method for precipitating the nickel portion of the toxic compound. Reactions of Ni(CO) 4 with alkyl and aryl halides often result in carbonylated organic products. 
Vinylic halides, such as PhCH=CHBr, are converted to the unsaturated esters upon treatment with Ni(CO) 4 followed by sodium methoxide. Such reactions also probably proceed via oxidative addition . Allylic halides give the π-allylnickel compounds, such as (allyl) 2 Ni 2 Cl 2 : [ 13 ] 2 Ni(CO) 4 + 2 ClCH 2 CH=CH 2 → Ni 2 ( μ -Cl) 2 ( η 3 - C 3 H 5 ) 2 + 8 CO The hazards of Ni(CO) 4 are far greater than those implied by its CO content, reflecting the effects of the nickel if released in the body. Nickel carbonyl may be fatal if absorbed through the skin or, more likely, inhaled, owing to its high volatility. Its LC 50 for a 30-minute exposure has been estimated at 3 ppm , and the concentration that is immediately fatal to humans would be 30 ppm. Some subjects exposed to puffs up to 5 ppm described the odour as musty or sooty, but because the compound is so exceedingly toxic, its smell provides no reliable warning against a potentially fatal exposure. [ 14 ] The vapours of Ni(CO) 4 can autoignite . The vapor decomposes quickly in air, with a half-life of about 40 seconds. [ 15 ] Nickel carbonyl poisoning is characterized by a two-stage illness. The first consists of headaches and chest pain lasting a few hours, usually followed by a short remission. The second phase is a chemical pneumonitis which starts after typically 16 hours with symptoms of cough, breathlessness and extreme fatigue. These reach greatest severity after four days, possibly resulting in death from cardiorespiratory failure or acute kidney injury . Convalescence is often extremely protracted, often complicated by exhaustion, depression and dyspnea on exertion. Permanent respiratory damage is unusual. The carcinogenicity of Ni(CO) 4 is a matter of debate, but is presumed to be significant. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. [ 16 ] "Requiem for the Living" (1978), an episode of Quincy, M.E. , features a poisoned, dying crime lord who asks Dr. Quincy to autopsy his still-living body. Quincy identifies the poison as nickel carbonyl. In the 1979 novella Amanda Morgan by Gordon R. Dickson , the remaining inhabitants of a mostly evacuated village resist an occupying military force by directing the exhaust from a poorly-tuned internal combustion engine onto a continually renewed "waste heap" of powdered nickel outside a machine shop (under the guise of civilian business) in order to eliminate the occupiers, at the cost of their own lives. In chapter 199 of the manga Dr. Stone , a machine is made that purifies nickel via the Mond process . It is mentioned that the process creates a "fatal toxin" (nickel carbonyl). In the 2019 novel Delta-v by New York Times bestselling author Daniel Suarez , a team of eight private miners reach a near-earth asteroid to extract volatiles (water, CO 2 , etc.) and metals (iron, nickel and cobalt); these are stored as solid carbonyl for transfer back to near Earth orbit , and used for in-situ fabrication of a spacecraft, via decomposition in vacuum.
https://en.wikipedia.org/wiki/Ni(CO)4
Nickel(II) nitrate is the inorganic compound Ni(NO 3 ) 2 or any hydrate thereof. In the hexahydrate, the nitrate anions are not bonded to nickel. Other hydrates have also been reported: Ni(NO 3 ) 2 ·9H 2 O, Ni(NO 3 ) 2 ·4H 2 O, and Ni(NO 3 ) 2 ·2H 2 O. [ 3 ] It is prepared by the reaction of nickel oxide with nitric acid: NiO + 2 HNO 3 → Ni(NO 3 ) 2 + H 2 O. The anhydrous nickel nitrate is typically not prepared by heating the hydrates. Rather it is generated by the reaction of hydrates with dinitrogen pentoxide or of nickel carbonyl with dinitrogen tetroxide : [ 3 ] The hydrated nitrate is often used as a precursor to supported nickel catalysts. [ 3 ] Nickel(II) compounds with oxygenated ligands often feature octahedral coordination geometry. Two polymorphs of the tetrahydrate Ni(NO 3 ) 2 ·4H 2 O have been crystallized. In one the monodentate nitrate ligands are trans, [ 4 ] while in the other they are cis. [ 5 ] Nickel(II) nitrate is primarily used in electrotyping and electroplating of metallic nickel. In heterogeneous catalysis, nickel(II) nitrate is used to impregnate alumina . Pyrolysis of the resulting material gives forms of Raney nickel and Urushibara nickel . [ 6 ] In homogeneous catalysis , the hexahydrate is a precatalyst for cross coupling reactions . [ 7 ]
https://en.wikipedia.org/wiki/Ni(NO3)2
Nickel(III) oxide is the inorganic compound with the formula Ni 2 O 3 . It is not well characterized, [ 1 ] and is sometimes referred to as black nickel oxide . Traces of Ni 2 O 3 on nickel surfaces have been mentioned. [ 2 ] [ 3 ] Nickel(III) oxide has been studied theoretically since the early 1930s, [ 4 ] with calculations supporting its instability at standard temperatures. A nanostructured pure phase of the material was synthesized and stabilized for the first time in 2015 from the reaction of nickel(II) nitrate with sodium hypochlorite and characterized using powder X-ray diffraction and electron microscopy. [ 5 ] This electrochemistry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Ni2O3
Nickel aluminide refers to either of two widely used intermetallic compounds, Ni 3 Al or NiAl, but the term is sometimes used to refer to any nickel–aluminium alloy. These alloys are widely used because of their high strength even at high temperature, low density, corrosion resistance, and ease of production. [ 1 ] Ni 3 Al is of specific interest as a precipitate in nickel-based superalloys , where it is called the γ' (gamma prime) phase. It gives these alloys high strength and creep resistance up to 0.7–0.8 of its melting temperature. [ 1 ] [ 2 ] Meanwhile, NiAl displays excellent properties such as lower density and higher melting temperature than those of Ni 3 Al, and good thermal conductivity and oxidation resistance. [ 2 ] These properties make it attractive for special high-temperature applications like coatings on blades in gas turbines and jet engines . However, both these alloys have the disadvantage of being quite brittle at room temperature, with Ni 3 Al remaining brittle at high temperatures as well. [ 1 ] To address this problem, it has been shown that Ni 3 Al can be made ductile when manufactured in single-crystal form rather than in polycrystalline form. [ 3 ] An important disadvantage of polycrystalline Ni 3 Al-based alloys is their room-temperature and high-temperature brittleness, which interferes with potential structural applications. This brittleness is generally attributed to the inability of dislocations to move in the highly ordered lattices. [ 5 ] The introduction of a small amount of boron can drastically increase the ductility by suppressing intergranular fracture. [ 6 ] Ni-based superalloys derive their strength from the formation of γ' precipitates (Ni 3 Al) in the γ phase (Ni) which strengthen the alloys through precipitation hardening . In these alloys the volume fraction of the γ' precipitates is as high as 80%. [ 7 ] Because of this high volume fraction, the evolution of these γ' precipitates during the alloys' life cycles is important: a major concern is the coarsening of these γ' precipitates at high temperature (800 to 1000 °C), which greatly reduces the alloys' strength. [ 7 ] This coarsening is due to the balance between interfacial and elastic energy in the γ + γ' phase and is generally inevitable over long durations of time. [ 7 ] This coarsening problem is addressed by introducing other elements such as Fe, Cr and Mo, which generate multiphase configurations that can significantly increase the creep resistance. [ 8 ] This creep resistance is attributed to the formation of the inhomogeneous precipitate Cr 4.6 MoNi 2.1 , which pins dislocations and prevents further coarsening of the γ' phase. [ 8 ] The addition of Fe and Cr also drastically increases the weldability of the alloy. [ 8 ] Despite its beneficial properties, NiAl generally suffers from two factors: very high brittleness at low temperatures (<330 °C (626 °F)) and rapid loss of strength for temperatures higher than 550 °C (1,022 °F). [ 9 ] The brittleness is attributed to both the high energy of anti-phase boundaries as well as high atomic order along grain boundaries. [ 9 ] As with Ni 3 Al-based alloys, these issues are generally addressed via the integration of other elements. Attempted elements can be broken into three groups depending on their influence on the microstructure: Some of the more successful elements have been shown to be Fe, Co and Cr, which drastically increase room temperature ductility as well as hot workability.
[ 10 ] This increase is due to the formation of the γ phase, which modifies the β phase grains. [ 10 ] Alloying with Fe, Ga and Mo has also been shown to drastically improve room temperature ductility. [ 11 ] Most recently, refractory metals such as Cr, W and Mo have been added, resulting not only in increases in room temperature ductility but also in increases in strength and fracture toughness at high temperatures. [ 12 ] This is due to the formation of unique microstructures such as the eutectic alloy Ni 45.5 Al 9 Mo and α-Cr inclusions that contribute to solid solution hardening. [ 12 ] It has even been shown that these complex alloys (Ni 42 Al 51 Cr 3 Mo 4 ) have the potential to be fabricated via additive manufacturing processes such as selective laser melting , vastly increasing the potential applications for these alloys. [ 12 ] In nickel-based superalloys, regions of Ni 3 Al (called γ' phase) precipitate out of the nickel-rich matrix (called γ phase) to give high strength and creep resistance. Many alloy formulations are available and they usually include other elements, such as chromium, molybdenum, and iron, in order to improve various properties. An alloy of Ni 3 Al, known as IC-221M, is made up of nickel aluminide combined with several other metals including chromium , molybdenum , zirconium and boron . Adding boron increases the ductility of the alloy by positively altering the grain boundary chemistry and promoting grain refinement. The Hall–Petch parameters for this material were σ o = 163 MPa and k y = 8.2 MPa·cm 1/2 . [ 13 ] Boron increases the hardness of bulk Ni 3 Al by a similar mechanism. This alloy is extremely strong for its weight, five times stronger than common SAE 304 stainless steel . Unlike most alloys, IC-221M increases in strength from room temperature up to 800 °C (1,470 °F). The alloy is very resistant to heat and corrosion , and finds use in heat-treating furnaces and other applications where its longer lifespan and reduced corrosion give it an advantage over stainless steel . [ 14 ] It has been found that the microstructure of this alloy includes a Ni 5 Zr eutectic phase and therefore solution treatment is effective for hot working without cracking. [ 15 ]
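As an illustrative aside (the grain size below is an assumed value, not taken from the cited study), the quoted Hall–Petch parameters translate into a yield-strength estimate through σ_y = σ_o + k_y · d^(−1/2). For a grain diameter of d = 10 μm = 10^(−3) cm, d^(−1/2) ≈ 31.6 cm^(−1/2), giving σ_y ≈ 163 MPa + 8.2 MPa·cm^(1/2) × 31.6 cm^(−1/2) ≈ 420 MPa.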
https://en.wikipedia.org/wiki/Ni3Al
Heazlewoodite , Ni 3 S 2 , is a rare sulfur -poor nickel sulfide mineral found in serpentinized dunite . It occurs as disseminations and masses of opaque, metallic light bronze to brassy yellow grains which crystallize in the trigonal crystal system. It has a hardness of 4 and a specific gravity of 5.82. Heazlewoodite was first described in 1896 from Heazlewood, Tasmania , Australia. [ 4 ] Heazlewoodite is formed within terrestrial rocks by metamorphism of peridotite and dunite via a process of nucleation. Heazlewoodite is the least sulfur-saturated of the nickel sulfide minerals and is only formed via metamorphic exsolution of sulfur from the lattice of metamorphic olivine . Heazlewoodite is thought to form from sulfur and nickel which exist in pristine olivine in trace amounts, and which are driven out of the olivine during metamorphic processes. Magmatic olivine generally has up to ~4000 ppm Ni and up to 2500 ppm S within the crystal lattice, as contaminants and substituting for other metals with similar ionic radii ( Fe 2+ and Mg 2+ ). During metamorphism, sulfur and nickel within the olivine lattice are reconstituted into metamorphic sulfide minerals, chiefly millerite , during serpentinization and talc carbonate alteration. When metamorphic olivine is produced, the propensity for this mineral to resorb sulfur, and for the sulfur to be removed via the concomitant loss of volatiles from the serpentinite, tends to lower sulfur fugacity . In this environment, nickel sulfide mineralogy converts to the lowest-sulfur state available, which is heazlewoodite. Heazlewoodite is known from a few ultramafic intrusions within terrestrial rocks. The Honeymoon Well ultramafic intrusion, Western Australia, is known to contain heazlewoodite-millerite sulfide assemblages within serpentinized olivine adcumulate dunite, formed from the metamorphic process. The mineral is also reported, again in association with millerite, from the ultramafic rocks of New Caledonia . This mineral has been found in meteorites [ 5 ] including irons [ 6 ] and CV carbonaceous chondrites . [ 7 ]
https://en.wikipedia.org/wiki/Ni3S2
Argon compounds , the chemical compounds that contain the element argon , are rarely encountered due to the inertness of the argon atom. However, compounds of argon have been detected in inert gas matrix isolation, cold gases, and plasmas, and molecular ions containing argon have been made and also detected in space. One solid interstitial compound of argon, Ar 1 C 60 , is stable at room temperature. Ar 1 C 60 was discovered by the CSIRO . Argon ionises at 15.76 eV, which is higher than hydrogen, but lower than helium, neon or fluorine. [ 1 ] Molecules containing argon can be van der Waals molecules held together very weakly by London dispersion forces . Ionic molecules can be bound by charge-induced dipole interactions. With gold atoms there can be some covalent interaction. [ 2 ] Several boron-argon bonds with significant covalent interactions have also been reported. [ 3 ] [ 4 ] Experimental methods used to study argon compounds have included inert gas matrices , infrared spectroscopy to study stretching and bending movements , microwave spectroscopy and far infrared to study rotation, and also visible and ultraviolet spectroscopy to study different electronic configurations including excimers . Mass spectrometry is used to study ions. [ 5 ] Computational methods have been used to theoretically compute molecule parameters, and predict new stable molecules. Computational ab initio methods used have included CCSD(T) , MP2 ( Møller–Plesset perturbation theory of the second order), CIS and CISD . For heavy atoms, effective core potentials are used to model the inner electrons, so that their contributions do not have to be individually computed. More powerful computers since the 1990s have made this kind of in silico study much more popular, being much less risky and simpler than an actual experiment. [ 5 ] This article is mostly based on experimental or observational results. The argon fluoride laser is important in photolithography of silicon chips. These lasers make a strong ultraviolet emission at 193 nm. [ 6 ] Argonium (ArH + ) is an ion combining a proton and an argon atom. It is found in interstellar space in diffuse atomic hydrogen gas where the fraction of molecular hydrogen H 2 is in the range of 0.0001 to 0.001. [ 1 ] Argonium is formed when H 2 + reacts with Ar atoms (Ar + H 2 + → ArH + + H), [ 1 ] and it is also produced, via Ar + + H 2 → ArH + + H, from Ar + ions generated from neutral argon by cosmic rays and X-rays. When ArH + encounters an electron, dissociative recombination can occur, but it is extremely slow for lower energy electrons, allowing ArH + to survive for a much longer time than many other similar protonated cations. Artificial ArH + made from earthly Ar contains mostly the isotope 40 Ar rather than the cosmically abundant 36 Ar. Artificially it is made by an electric discharge through an argon-hydrogen mixture. [ 8 ] In the Crab Nebula , ArH + occurs in several spots revealed by emission lines . The strongest place is in the Southern Filament. This is also the place with the strongest concentration of Ar + and Ar 2+ ions. [ 7 ] The column density of ArH + in the Crab Nebula is between 10 12 and 10 13 atoms per square centimeter. [ 7 ] The energy required to excite the ions so that they can emit possibly comes from collisions with electrons or hydrogen molecules. [ 7 ] Towards the Milky Way centre the column density of ArH + is around 2 × 10 13 cm −2 . [ 1 ] The diargon cation, Ar + 2 , has a binding energy of 1.29 eV. [ 9 ] The triargon cation Ar + 3 is linear, but has one Ar−Ar bond shorter than the other.
Bond lengths are 2.47 and 2.73 ångströms . The dissociation energy to Ar and Ar 2 + is 0.2 eV. In line with the molecule's asymmetry, the charge is calculated as +0.10, +0.58 and +0.32 on each argon atom, so that it greatly resembles Ar + 2 bound to a neutral Ar atom. [ 10 ] Larger charged argon clusters are also detectable in mass spectrometry. The tetraargon cation is also linear. Ar + 13 icosahedral clusters have an Ar + 3 core, whereas Ar + 19 is dioctahedral with an Ar + 4 core. The linear Ar + 4 core has +0.1 charge on the outer atoms, and +0.4 charge on each of the inner atoms. For larger charged argon clusters, the charge is not distributed on more than four atoms. Instead the neutral outer atoms are attracted by induced electric polarization. [ 11 ] The charged argon clusters absorb radiation, from the near infrared, through visible to ultraviolet. The charge core, Ar + 2 , Ar + 3 or Ar + 4 , is called a chromophore . Its spectrum is modified by the first shell of neutral atoms attached. Larger clusters have the same spectrum as the smaller ones. When photons are absorbed in the chromophore , it is initially electronically excited , but then energy is transferred to the whole cluster in the form of vibration . Excess energy is removed by outer atoms evaporating from the cluster one at a time. The process of destroying a cluster by light is called photofragmentation . [ 11 ] Negatively-charged argon clusters are thermodynamically unstable, and therefore cannot exist, as argon has a negative electron affinity . [ 11 ] Neutral argon hydride, also known as argon monohydride (ArH), was the first discovered noble gas hydride. J. W. C. Johns discovered an emission line of ArH at 767 nm and announced the find in 1970. The molecule was synthesized using X-ray irradiation of mixtures of argon with hydrogen-rich molecules such as H 2 , H 2 O , CH 4 and CH 3 OH . [ 12 ] The X-ray excited argon atoms are in the 4p state. [ 13 ] Argon monohydride is unstable in its ground state, 4s, as a neutral inert gas atom and a hydrogen atom repel each other at normal intermolecular distances. When a higher-energy-level ArH* emits a photon and reaches the ground state, the atoms are too close to each other, and they repel and break up. Nevertheless, a van der Waals molecule can exist with a long bond. [ 14 ] However, excited ArH* can form stable Rydberg molecules , also known as excimers . These Rydberg molecules can be considered as a protonated argon core, surrounded by an electron in one of many possible higher energy states. [ 15 ] Instead of dihydrogen, other hydrogen-containing molecules can also have a hydrogen atom abstracted by excited argon, but note that some molecules bind hydrogen too strongly for the reaction to proceed. For example, acetylene will not form ArH this way. [ 12 ] In the van der Waals molecule of ArH, the bond length is calculated to be about 3.6 Å and the dissociation energy calculated to be 0.404 kJ/mol (33.8 cm −1 ). [ 16 ] The bond length in ArH* is calculated as 1.302 Å. [ 17 ] The spectrum of argon monohydride, both ArH* and ArD*, has been studied. The lowest bound state is termed A 2 Σ + or 5s. Another low-lying state is known as 4p, made up of C 2 Σ + and B 2 Π states. Each transition to or from higher level states corresponds to a band.
Known bands are 3p → 5s, 4p → 5s, 5p → 5s (band origin 17 486.527 cm −1 [ 18 ] ), 6p → 5s (band origin 21 676.90 cm −1 [ 18 ] ), 3dσ → 4p, 3dπ → 4p (6900 cm −1 ), 3dδ → 4p (8200–8800 cm −1 ), 4dσ → 4p ( 15 075 cm −1 ), 6s → 4p (7400–7950 cm −1 ), 7s → 4p (predicted at 13 970 cm −1 , but obscured), 8s → 4p ( 16 750 cm −1 ), 5dπ → 4p ( 16 460 cm −1 ), 5p → 6s (band origin 3681.171 cm −1 ), [ 19 ] 4f → 5s ( 20 682.17 and 20 640.90 cm −1 band origin for ArD and ArH), 4f → 3dπ (7548.76 and 7626.58 cm −1 ), 4f → 3dδ (6038.47 and 6026.57 cm −1 ), 4f → 3dσ (4351.44 cm −1 for ArD). [ 14 ] The transitions going to 5s, 3dπ → 5s and 5dπ → 5s, are strongly predissociated , blurring out the lines. [ 19 ] In the UV spectrum a continuous band exists from 200 to 400 nm. This band is due to two different higher states: B 2 Π → A 2 Σ + radiates over 210–450 nm, and E 2 Π → A 2 Σ + is between 180 and 320 nm. [ 20 ] There is also a band in the near infrared from 760 to 780 nm. [ 21 ] Other ways to make ArH include a Penning -type discharge tube, or other electric discharges. Yet another way is to create a beam of ArH + (argonium) ions and then neutralize them in laser-energized caesium vapour. By using a beam, the lifetimes of the different energy states can be observed by measuring the profile of electromagnetic energy emitted at different wavelengths. [ 22 ] The E 2 Π state of ArH has a radiative lifetime of 40 ns. For ArD the lifetime is 61 ns. The B 2 Π state has a lifetime of 16.6 ns in ArH and 17 ns in ArD. [ 20 ] The argon dihydrogen cation ArH + 2 has been predicted to exist and to be detectable in the interstellar medium . However, it had not been detected as of 2021. [ 23 ] ArH + 2 is predicted to be linear in the form Ar−H−H. The H−H distance is 0.94 Å. The dissociation barrier is only 2 kcal/mol (8 kJ/mol), and ArH + 2 readily loses a hydrogen atom to yield ArH + . [ 24 ] The force constant of the ArH bond in this ion is 1.895 mdyne/Å 2 ( 1.895 × 10 12 Pa ). [ 25 ] The argon trihydrogen cation ArH + 3 has been observed in the laboratory. [ 23 ] [ 26 ] ArH 2 D + , ArHD + 2 and ArD + 3 have also been observed. [ 27 ] The argon trihydrogen cation is planar in shape, with an argon atom off the vertex of a triangle of hydrogen atoms. [ 28 ] The argoxonium ion ArOH + is predicted to have a bent molecular geometry in the 1 1 A′ state. 3 Σ − is a triplet state 0.12 eV higher in energy, and 3 A″ is a triplet state 0.18 eV higher. The Ar−O bond is predicted to be 1.684 Å long [ 23 ] and to have a force constant of 2.988 mdyne/Å 2 ( 2.988 × 10 12 Pa ). [ 25 ] ArNH + is a possible ionic molecule to detect in the lab, and in space, as the atoms that compose it are common. ArNH + is predicted to be more weakly bound than ArOH + , with a force constant in the Ar−N bond of 1.866 mdyne/Å 2 ( 1.866 × 10 12 Pa ). The angle at the nitrogen atom is predicted to be 97.116°. The Ar−N lengths should be 1.836 Å and the N−H bond length would be 1.046 Å. [ 25 ] [ 29 ] The linear argon–dinitrogen cationic complex has also been detected in the lab. Its dissociation yields Ar + , as this is a higher-energy state. [ 9 ] The binding energy is 1.19 eV. [ 9 ] The molecule is linear. The distance between the two nitrogen atoms is 1.1 Å. This distance is similar to that of neutral N 2 rather than that of the N + 2 ion. The distance between one nitrogen and the argon atom is 2.2 Å. [ 9 ] The vibrational band origin for the nitrogen bond in ArN + 2 ( V = 0 → 1) is at 2272.2564 cm −1 compared with N 2 + at 2175 and N 2 at 2330 cm −1 .
[ 9 ] In the process of photodissociation , it is three times more likely to yield Ar + + N 2 than Ar + N + 2 . [ 30 ] ArHN + 2 has been produced in a supersonic jet expansion of gas and detected by Fourier transform microwave spectroscopy . [ 26 ] The molecule is linear, with the atoms in the order Ar−H−N−N. The Ar−H distance is 1.864 Å. There is a stronger bond between hydrogen and argon than in ArHCO + . [ 31 ] The molecule is made by the reaction of ArH + with N 2 . The argon ion can bond two molecules of dinitrogen (N 2 ) to yield an ionic complex with a linear shape and the structure N=N−Ar + −N=N. The N=N bond length is 1.1014 Å, and the nitrogen to argon bond length is 2.3602 Å. 1.7 eV of energy is required to break this apart to N 2 and ArN + 2 . The band origin of an infrared band due to antisymmetric vibration of the N=N bonds is at 2288.7272 cm −1 . Compared to N 2 it is redshifted 41.99 cm −1 . The ground state rotational constant of the molecule is 0.034 296 cm −1 . [ 30 ] Ar(N 2 ) + 2 is produced by a supersonic expansion of a 10:1 mixture of argon with nitrogen through a nozzle, which is impacted by an electron beam . [ 30 ] ArN 2 O + absorbs photons in four violet–ultraviolet wavelength bands leading to breakup of the molecule. The bands are 445–420, 415–390, 390–370, and 342 nm. [ 32 ] [ 33 ] ArHCO + has been produced in a supersonic-jet expansion of gas and detected by Fabry–Perot-type Fourier transform microwave spectroscopy. [ 26 ] [ 34 ] The molecule is made by the reaction ArH + + CO → ArHCO + . [ 31 ] BO + forms four complexes with argon: ArBO + ; two isomers of Ar 2 BO + (one with equidistant Ar-B bonds and another with a short and long bond); and Ar 3 BO + . These ions were formed by firing a green laser at a boron target in a gaseous mixture of helium, argon and nitrous oxide. [ 35 ] ArCO + 2 can be excited to form ArCO + 2 *, where the positive charge is moved from the carbon dioxide part to the argon. This molecule may occur in the upper atmosphere. Experimentally the molecule is made from low-pressure argon gas with 0.1% carbon dioxide , irradiated by a 150 V electron beam . Argon is ionized, and can transfer the charge to a carbon dioxide molecule. [ 36 ] The dissociation energy of ArCO + 2 is 0.26 eV. [ 36 ] ArCO + 2 + CO 2 → Ar + CO 2 ·CO + 2 (yields 0.435 eV). [ 36 ] Neutral argon atoms bind very weakly to other neutral atoms or molecules to form van der Waals molecules . These can be made by expanding argon under high pressure mixed with the atoms of another element. The expansion happens through a tiny hole into a vacuum, and results in cooling to temperatures a few degrees above absolute zero. At higher temperatures the atoms will be too energetic to stay together by way of the weak London dispersion forces . The atoms that are to combine with argon can be produced by evaporation with a laser or alternatively by an electric discharge. The known molecules include AgAr, Ag 2 Ar, NaAr, KAr, MgAr, CaAr, SrAr, ZnAr, CdAr, HgAr, SiAr, [ 37 ] InAr, CAr, [ 38 ] GeAr, [ 39 ] SnAr, [ 40 ] and BAr. [ 41 ] SiAr was made from silicon atoms derived from Si(CH 3 ) 4 . [ 42 ] In addition to the very weakly bound van der Waals molecules, electronically excited molecules with the same formula exist. As a formula these can be written ArX*, with the "*" indicating an excited state . The atoms are much more strongly bound with a covalent bond. They can be modeled as an ArX + surrounded by a higher energy shell with one electron.
This outer electron can change energy by exchanging photons and so can fluoresce. The widely used argon fluoride laser makes use of the ArF* excimer to produce strong ultraviolet radiation at 193 nm. The argon chloride laser using ArCl* produces even shorter ultraviolet at 175 nm, but is too feeble for application. [ 43 ] The argon chloride in this laser comes from argon and chlorine molecules. [ 44 ] Cooled argon gas can form clusters of atoms. Diargon , also known as the argon dimer, has a binding energy of 0.012 eV, but the Ar 13 and Ar 19 clusters have a sublimation energy (per atom) of 0.06 eV. For liquid argon, which could be written as Ar ∞ , the energy increases to 0.08 eV. Clusters of up to several hundred argon atoms have been detected. These argon clusters are icosahedral in shape, consisting of shells of atoms arranged around a central atom. The structure changes for clusters with more than 800 atoms to resemble a tiny crystal with a face-centered cubic (fcc) structure, as in solid argon. It is the surface energy that maintains an icosahedral shape, but for larger clusters internal pressure will attract the atoms into an fcc arrangement. [ 11 ] Neutral argon clusters are transparent to visible light. [ 11 ] ArO* is also formed when dioxygen trapped in an argon matrix is subjected to vacuum ultraviolet . It can be detected by its luminescence. Light emitted by ArO* has two main bands, one at 2.215 eV, and a weaker one at 2.195 eV. [ 51 ] Argon sulfide, ArS*, luminesces in the near infrared at 1.62 eV. ArS is made from UV-irradiated OCS in an argon matrix. The excited states last for 7.4 and 3.5 μs for the spectrum peak and band, respectively. [ 52 ] Cluster molecules containing dichlorine and more than one argon atom can be made by forcing a 95:5 mixture of helium and argon and a trace of chlorine through a nozzle. ArCl 2 exists in a T shape. Ar 2 Cl 2 has a distorted tetrahedron shape, with the two argon atoms 4.1 Å from each other, and their axis 3.9 Å from the Cl 2 . The van der Waals bond energy is 447 cm −1 . Ar 3 Cl 2 also exists with a van der Waals bond energy of 776 cm −1 . [ 53 ] The linear Ar·Br 2 molecule has a continuous spectrum for bromine molecule X → B transitions. The spectrum of bromine is blue-shifted and spread out when it binds an argon atom. [ 54 ] ArI 2 shows a spectrum that adds satellite bands to the higher vibrational bands of I 2 . [ 55 ] The ArI 2 molecule has two different isomers: one shape is linear, and the other is T-shaped. The dynamics of ArI 2 is complex. Breakup occurs through different routes in the two isomers. The T shape undergoes intramolecular vibrational relaxation, whereas the linear one directly breaks apart. [ 56 ] Diiodine clusters, I 2 Ar n , have been made. [ 57 ] The ArClF cluster has a linear shape. [ 58 ] The argon atom is closest to the chlorine atom. [ 54 ] Linear ArBrCl can also rearrange to ArClBr, or a T-shaped isomer. [ 59 ] Multiple argon atoms can " solvate " a water molecule, forming a monolayer around the H 2 O. Ar 12 ·H 2 O is particularly stable, having an icosahedral shape. Molecules from Ar·H 2 O to Ar 14 ·H 2 O have been studied. [ 60 ] ArBH was produced from boron monohydride (BH), which in turn was created from diborane by way of an ultraviolet 193 nm laser. The BH-argon mixture was expanded through a 0.2 mm diameter nozzle into a vacuum. The gas mixture cools and Ar and BH combine to yield ArBH. A band spectrum that combines the A 1 Π←X 1 Σ + electronic transition with vibration and rotation can be observed.
The BH has singlet spin, and this is the first known van der Waals complex with a singlet spin pair of atoms. For this molecule the rotational constant is 0.133 cm −1 , the dissociation energy is 92 cm −1 , and the distance from the argon to the boron atom is 3.70 Å. [ 61 ] ArAlH is also known to exist. [ 62 ] MgAr 2 is also known. [ 48 ] Some linear polyatomic molecules can form T-shaped van der Waals complexes with argon. These include NCCN , carbon dioxide , nitrous oxide , acetylene , carbon oxysulfide , and ClCN . Others attach the argon atom at one end and remain linear, including HCN . [ 63 ] Other polyatomic van der Waals compounds of argon include those of fluorobenzene , [ 64 ] formyl radical (ArHCO), [ 65 ] 7-azaindole , [ 66 ] glyoxal , [ 67 ] sodium chloride (ArNaCl), [ 68 ] ArHCl, [ 69 ] and cyclopentanone . [ 70 ] Argon dissolved in water causes the pH to rise to 8.0, [ 82 ] apparently by reducing the number of oxygen atoms available to bind protons. [ 83 ] With ice, argon forms a clathrate hydrate . Up to 0.6 GPa, the clathrate has a cubic structure. Between 0.7 and 1.1 GPa the clathrate has a tetragonal structure. Between 1.1 and 6.0 GPa the structure is body centered orthorhombic. Over 6.1 GPa, the clathrate converts into solid argon and ice VII . [ 84 ] At atmospheric pressure the clathrate is stable below 147 K. [ 85 ] At 295 K the argon pressure from the clathrate is 108 MPa. [ 86 ] Argon fluorohydride was an important discovery in the rejuvenation of the study of noble gas chemistry. HArF is stable in solid form at temperatures below 17 K. [ 87 ] It is prepared by photolysis of hydrogen fluoride in a solid argon matrix. [ 88 ] HArArF would have such a low barrier to decomposition that it will likely never be observed. [ 89 ] However, HBeArF is predicted to be more stable than HArF. [ 90 ] CUO in a solid argon matrix can bind one or a few argon atoms to yield CUO·Ar, CUO·Ar 3 or CUO·Ar 4 . CUO itself is made by evaporating uranium atoms into carbon monoxide . Uranium acts as a strong Lewis acid in CUO [ 88 ] [ 91 ] and forms bonds with energies of about 3.2 kcal/mol (13.4 kJ/mol) with argon. The argon acts as a Lewis base . Its electron density is inserted into an empty 6d orbital on the uranium atom. The spectrum of CUO is changed by argon so that the U−O stretch frequency changes from 872.2 to 804.3 cm −1 and the U−C stretch frequency from 1047.3 to 852.5 cm −1 . [ 92 ] The significant change in the spectrum occurs because the CUO is changed from a singlet state (in gas phase or solid neon) to a triplet state upon complexing with argon or another noble gas. [ 93 ] The argon–uranium bond length is 3.16 Å. [ 92 ] This is shorter than the sum of atomic radii of U and Ar of 3.25 Å, but considerably longer than a normal covalent bond to uranium. For example, U−Cl in UCl 6 is 2.49 Å. [ 93 ] When xenon is included in the solid argon matrix up to a few percent, additional van der Waals molecules are formed: CUO·Ar 3 Xe, CUO·Ar 2 Xe 2 , CUO·ArXe 3 and CUO·Xe 4 . [ 91 ] Similarly krypton can substitute for argon in CUO·Ar 3 Kr, CUO·Ar 2 Kr 2 , CUO·ArKr 3 and CUO·Kr 4 . [ 93 ] The shape of these molecules is roughly octahedral , with a uranium centre and with the noble gas atoms around the equator. [ 93 ] UO + 2 can bind up to five noble gas atoms in a ring around its linear O=U=O core. [ 94 ] These molecules are produced when uranium metal is laser ablated into dioxygen. This produces UO, UO 2 , UO 3 , U + , and importantly UO + 2 .
UO + 2 is then condensed into a noble gas matrix, either a pure element or a mixture. Heavier noble gas atoms will tend to displace the lighter atoms. Ionic molecules produced this way include UO 2 Ne 4 Ar + , UO 2 Ne 3 Ar + 2 , UO 2 Ne 2 Ar + 3 , UO 2 NeAr + 4 , UO 2 Ar + 5 , UO 2 Ar 4 Kr + , UO 2 Ar 3 Kr + 2 , UO 2 Ar 2 Kr + 3 , UO 2 ArKr + 4 , UO 2 Ar 4 Xe + , UO 2 Ar 3 Xe + 2 , UO 2 Ar 2 Xe + 3 , and UO 2 ArXe + 4 , which are identified by a shift in the U=O antisymmetric stretching frequency. [ 94 ] Neutral UO 2 condensed in solid argon is converted from one electronic state to another by the argon atom ligands. In argon the electron configuration is 5f 2 (δφ) whereas in neon it is 5f 1 7s 1 (the state 3 H 4g compared to 3 Φ 2u ). This is because the argon atoms have a larger antibonding interaction with the 7s 1 electron, forcing it into a different subshell. The argon-coordinated compound has a stretching frequency of 776 cm −1 compared to 914.8 cm −1 in neon . [ 95 ] The argon uranium dioxide molecule is likely UO 2 Ar 5 . [ 96 ] When beryllium atoms react with oxygen in a solid argon matrix (or beryllia is evaporated into the matrix), ArBeO is formed, and is observable by its infrared spectrum. The beryllia molecule is strongly polarised, and the argon atom is attracted to the beryllium atom. [ 93 ] [ 97 ] The bond strength of Ar−Be is calculated to be 6.7 kcal/mol (28 kJ/mol). The Ar−Be bond length is predicted to be 2.042 Å. [ 98 ] The cyclic Be 2 O 2 molecule can bind two argon atoms, or one argon along with another noble gas atom. [ 99 ] Analogously, beryllium reacting with hydrogen sulfide and trapped in an argon matrix at 4 K forms ArBeS. It has a binding energy calculated to be 12.8 kcal/mol (54 kJ/mol). [ 100 ] ArBeO 2 CO (beryllium carbonate) has been prepared (along with Ne, Kr and Xe adducts). [ 101 ] The cyclic beryllium sulfite molecule can also coordinate an argon atom onto the beryllium atom in a solid neon or argon matrix. [ 102 ] Group 6 elements can form reactive pentacarbonyls that can react with argon. These argon compounds were actually discovered in 1975 and were known before the discovery of HArF, but they are usually overlooked. [ 103 ] Tungsten normally forms a hexacarbonyl , but when subjected to ultraviolet radiation it breaks into a reactive pentacarbonyl. When this is condensed into a noble gas matrix the infrared and UV spectrum varies considerably depending on the noble gas used. This is because the noble gas present binds to the vacant position on the tungsten atom. Similar results also occur with molybdenum and chromium . [ 104 ] Argon is only very weakly bound to tungsten in ArW(CO) 5 . [ 93 ] [ 105 ] The Ar−W bond length is predicted to be 2.852 Å. [ 104 ] The same substance is produced for a brief time in supercritical argon at 21 °C. [ 106 ] For ArCr(CO) 5 the band maximum is at 533 nm (compared to 624 nm in neon , and 518 nm in krypton ). For 18-electron complexes, the shift in spectrum due to different matrices was much smaller, only around 5 nm; the much larger shift for the pentacarbonyls clearly indicates the formation of a molecule using atoms from the matrix. [ 5 ] Bonding to argon has also been reported for other carbonyls and complexed carbonyls. These include Ru(CO) 2 (PMe 3 ) 2 Ar, Ru(CO) 2 ( dmpe ) 2 Ar, and η 6 -C 6 H 6 Cr(CO) 2 Ar. [ 107 ] Evidence also exists for ArHMn(CO) 4 , ArCH 3 Mn(CO) 4 , and fac -( η 2 -dfepe)Cr(CO) 3 Ar. [ 5 ] Other noble gas complexes have been studied by photolysis of carbonyls dissolved in liquid rare gas, possibly under pressure.
These Kr or Xe complexes decay on the time scale of seconds, but argon does not seem to have been studied this way. The advantage of liquid noble gases is that the medium is completely transparent to infrared radiation, which is needed to study the bond vibration in the solute. [ 5 ] Attempts have been made to study carbonyl–argon adducts in the gas phase, but the interaction appears to be too weak to observe a spectrum. In the gas form, the absorption lines are broadened into bands because of rotation that happens freely in a gas. [ 5 ] The argon adducts in liquids or gases are unstable as the molecules easily react with the other photolysis products, or dimerize , eliminating argon. [ 5 ] The argon coinage metal monohalides were the first noble gas metal halides discovered, when the metal monohalide molecules were put through an argon jet. They were first found in Vancouver in 2000. [ 108 ] ArMX with M = Cu , Ag or Au and X = F , [ 109 ] Cl or Br have been prepared. The molecules are linear. In ArAuCl the Ar−Au bond is 2.47 Å, the stretching frequency is 198 cm −1 and the dissociation energy is 47 kJ/mol. [ 110 ] ArAgBr also has been made. [ 110 ] ArAgF has a dissociation energy of 21 kJ/mol. [ 110 ] The Ar−Ag bond length in these molecules is 2.6 Å. [ 110 ] ArAgCl is isoelectronic with AgCl − 2 , which is better known. [ 110 ] The Ar−Cu bond length in these molecules is 2.25 Å. [ 110 ] In a solid argon matrix VO 2 forms VO 2 Ar 2 , and VO 4 forms VO 4 ·Ar, with binding energies calculated to be 12.8 and 5.0 kcal/mol (53 and 21 kJ/mol). [ 111 ] Scandium in the form of ScO + coordinates five argon atoms to yield ScOAr + 5 . [ 112 ] These argon atoms can be substituted by varying numbers of krypton or xenon atoms to yield even more mixed noble gas molecules. With yttrium , YO + bonds six argon atoms, and these too can be substituted by varying numbers of krypton or xenon atoms. [ 113 ] In the case of transition metal monoxides, ScO, TiO and VO do not form a molecule with one argon atom. However, CrO, MnO, FeO, CoO and NiO can each coordinate one argon atom in a solid argon matrix. [ 114 ] The metal monoxide molecules can be produced by laser ablation of the metal trioxide, followed by condensation on solid argon. ArCrO absorbs at 846.3 cm −1 , ArMnO at 833.1, ArFeO at 872.8, ArCoO at 846.2, Ar 58 NiO at 825.7 and Ar 60 NiO at 822.8 cm −1 . All these molecules are linear. [ 114 ] There are also claims of argon forming coordination molecules in NbO 2 Ar 2 , NbO 4 Ar, TaO 4 Ar, [ 115 ] VO 2 Ar 2 , VO 4 Ar, [ 111 ] Rh( η 2 -O 2 )Ar 2 , Rh( η 2 -O 2 ) 2 Ar 2 , Rh( η 2 -O 2 ) 2 ( η 1 -OO)Ar. [ 116 ] [ 117 ] [ 118 ] Tungsten trioxide , WO 3 , and tungsten dioxide mono-superoxide (η 2 -O 2 )WO 2 can both coordinate argon in an argon matrix. The argon can be replaced by xenon or molecular oxygen to make xenon-coordinated compounds or superoxides. For WO 3 Ar the binding energy is 9.4 kcal/mol and for (η 2 -O 2 )WO 2 it is 8.1 kcal/mol. [ 119 ] ArNiN 2 binds argon with 11.52 kcal/mol. The bending frequency of ArN 2 is changed from 310.7 to 358.7 cm −1 when argon attaches to the nickel atom. [ 120 ] Some other binary ions observed that contain argon include BaAr 2+ and BaAr 2+ 2 , [ 121 ] VAr + , CrAr + , FeAr + , CoAr + , and NiAr + . [ 5 ] Gold and silver cluster ions can bind argon. Known ions are Au 3 Ar + , Au 3 Ar + 2 , Au 3 Ar + 3 , Au 2 AgAr + 3 and AuAg 2 Ar + 3 . These have a triangular-shaped metallic core with argon bound at the vertices.
[ 2 ] ArF + is also known [ 5 ] to be formed in several ion–molecule reactions. The ions can be produced by ultraviolet light at 79.1 nm or less. [ 123 ] The ionisation energy of fluorine is higher than that of argon, so breakup yields Ar + and F. The millimeter wave spectrum of ArF + between 119.0232 and 505.3155 GHz has been measured to calculate molecular constants B 0 = 14.878 8204 GHz , D 0 = 28.718 kHz. [ 125 ] There is a possibility that a solid salt of ArF + could be prepared with SbF − 6 or AuF − 6 anions. [ 124 ] [ 126 ] Excited or ionized argon atoms can react with molecular iodine gas to yield ArI + . [ 127 ] Argon plasma is used as an ionisation source and carrier gas in inductively coupled plasma mass spectrometry . This plasma reacts with samples to produce monatomic ions, but also forms argon oxide (ArO + ) and argon nitride (ArN + ) cations, which can cause isobaric interference with detection and measurement of iron-56 ( 56 Fe) and iron-54 ( 54 Fe), respectively, in mass spectrometry. [ 128 ] Platinum present in stainless steel can form platinum argide (PtAr + ), which interferes with the detection of uranium-234, which can be used as a tracer in aquifers. [ 129 ] Argon chloride cations can interfere with the detection of arsenic as Ar 35 Cl + has a mass-to-charge ratio almost identical to that of arsenic's one stable isotope , 75 As. [ 130 ] In these circumstances ArO + may be removed by reaction with NH 3 . [ 131 ] Alternatively, electrothermal vaporization or the use of helium gas can avoid these interference problems. [ 128 ] Argon can also form an anion with chlorine, ArCl − , [ 132 ] though this is not a problem for mass spectrometry applications as only cations are detected. The argon borynium ion, BAr + , is produced when BBr + reacts with argon atoms at energies between 9 and 11 eV. 90% of the positive charge is on the argon atom. [ 133 ] ArC + ions can be formed when argon ions impact carbon monoxide with energies between 21 and 60 eV. However, more C + ions are formed, and at the higher energies the O + yield increases. [ 134 ] ArN + can form when argon ions impact dinitrogen with energies between 8.2 and 41.2 eV, peaking around 35 eV. However, far more N + 2 and N + are produced. [ 135 ] ArXe + is held together with a strength of 1445 cm −1 when it is in the X electronic state, but 1013 cm −1 when it is in the B excited state. [ 33 ] Metal–argon cations are called "argides". The argide ions produced during mass spectrometry have higher intensity when the binding energy of the ion is higher. Transition elements have higher binding and ion flux intensity compared to main group elements. Argides can be formed in the plasma by excited argon atoms reacting with another element atom, or by an argon atom binding with another ion. Doubly charged cations, called superelectrophiles , are capable of reacting with argon. Ions produced include ArCF 2+ 2 , ArCH + 2 , ArBF + 2 and ArBF 2+ , containing bonds between argon and carbon or boron. [ 137 ] Doubly ionised acetylene HCCH 2+ reacts inefficiently with argon to yield HCCAr 2+ . This product competes with the formation of Ar + and argonium. [ 138 ] The SiF 2+ 3 ion reacts with argon to yield ArSiF 2+ 2 . [ 139 ] Metal ions can also form with more than one argon atom, in a kind of argon metal cluster. Different-sized metal ions at the centre of a cluster can fit different geometries of argon atoms around the ion. [ 150 ] Argides with multiple argon atoms have been detected in mass spectrometry.
These can have variable numbers of argon attached, but there are magic numbers, where the complex more commonly has a particular number, either four or six argon atoms. [ 151 ] These can be studied by time-of-flight mass spectrometry and by the photodissociation spectrum . Other study methods include Coulomb explosion analysis. [ 152 ] Argon-tagging is a technique whereby argon atoms are weakly bound to a molecule under study. It results in a much lower temperature of the tagged molecules, with sharper infra-red absorption lines. The argon-tagged molecules can be disrupted by photons of a particular wavelength. [ 153 ] Lithium ions add argon atoms to form clusters with more than a hundred argon atoms. The cluster Li + Ar 4 is particularly stable and common. Calculations show that the small clusters are all quite symmetrical. Li + Ar 2 is linear, Li + Ar 3 is flat and triangular shaped with D 3h symmetry, Li + Ar 4 is tetrahedral, and Li + Ar 5 could be a square pyramid or trigonal bipyramid shape. Li + Ar 6 is an octahedron shape with Li at the centre. Li + Ar 7 or slightly larger clusters have a core octahedron of argon atoms with one or more triangular faces capped by other argon atoms. The bonding is much weaker, which explains their greater scarcity. [ 154 ] Sodium forms clusters with argon atoms with peaks at numbers of 8, 10, 16, 20, 23, 25 and 29, and also at the icosahedral numbers of 47, 50, 57, 60, 63, 77, 80, 116 and 147 argon atoms. This includes the square antiprism (8) and the capped square antiprism (10 atoms). [ 150 ] In Ti + Ar 1−n the argon atoms induce a mixing of the ground electronic state of 3d 2 4s 1 with 3d 3 4s 0 . When a plasma of titanium in expanding argon gas is made via a laser, clusters from Ti + Ar up to Ti + Ar 50 are formed. But Ti + Ar 6 is much more common than all the others. In this the six argon atoms are arranged in an octahedron shape around the central titanium ion. DFT calculations predict that Ti + Ar 2 is linear and that Ti + Ar 3 is not even flat, having one short and two longer Ti-Ar bonds. Ti + Ar 4 is a distorted tetrahedron, with one longer Ti-Ar bond. Ti + Ar 5 is an asymmetrical trigonal bipyramid shape with one bond shorter. For clusters with seven or more argon atoms, the structure contains a Ti + Ar 6 octahedron with triangular faces capped by more argon atoms. [ 155 ] Cu + Ar 2 is predicted to be linear. Cu + Ar 3 is predicted to be planar T-shaped with an Ar-Cu-Ar angle of 93°. Cu + Ar 4 is predicted to be rhombic planar (not square or tetrahedral). For alkali and alkaline earth metals the M + Ar 4 cluster is tetrahedral. Cu + Ar 5 is predicted to have a rhombic pyramid shape. Cu + Ar 6 has a flattened octahedral shape. Cu + Ar 7 is much less stable, and the seventh argon atom is outside an inner shell of six argon atoms. This is called capped octahedral. A complete second shell of argon atoms yields Cu + Ar 34 . Above this number a structural change takes place to an icosahedral arrangement, with Cu + Ar 55 and Cu + Ar 146 having more stability. [ 156 ] With a strontium ion Sr + , from two to eight argon atoms can form clusters. Sr + Ar 2 has a triangle shape with C 2 v symmetry. Sr + Ar 3 has a trigonal pyramid shape with C 3 v symmetry. Sr + Ar 4 has two trigonal pyramids sharing a face and strontium at the common apex. It has a C 2 v symmetry. Sr + Ar 6 has a pentagonal pyramid of argon atoms with the strontium atom below the base.
[ 157 ] Niobium tetraargide , Nb + Ar 4 , probably has the argon atoms arranged in a square around the niobium. Similarly for vanadium tetraargide, V + Ar 4 . The hexaargides Co + Ar 6 and Rh + Ar 6 likely have an octahedral argon arrangement. [ 151 ] The indium monocation forms clusters with multiple argon atoms, with magic numbers at 12, 18, 22, 25, 28, 45, 54 and 70 argon atoms, which are numbers for icosahedral shapes. [ 150 ] By zapping copper metal with a UV laser in an argon-carbon monoxide mixture, argon-tagged copper carbonyl cations are formed. These ions can be studied by observing which wavelengths of infrared radiation cause the molecules to break up. These molecular ions include CuCO + Ar, Cu(CO) 2 + Ar, Cu(CO) 3 + Ar and Cu(CO) 4 + Ar, which are disrupted to lose argon by infrared wavenumbers of 2216, 2221, 2205 and 2194 cm −1 , respectively. The argon binding energies are respectively 16.3, 1.01, 0.97 and 0.23 kcal/mol. The infrared absorption peak for Cu(CO) 3 + Ar is 2205 cm −1 compared to 2199 cm −1 for Cu(CO) 3 + . For Cu(CO) 4 + Ar the peak is at 2198 cm −1 compared to 2193 for Cu(CO) 4 + . For Cu(CO) 2 + Ar the peak is at 2221 cm −1 compared to 2218.3 cm −1 for the argon-free ion, and for CuCO + Ar the peak is at 2216 cm −1 , considerably different from 2240.6 cm −1 for CuCO + . Computationally predicted shapes for these molecular ions are linear for CuCO + Ar, slightly bent T-shaped for Cu(CO) 2 + Ar, and a trigonal pyramid with argon at the top and a flat star-like copper tricarbonyl forming the base. [ 158 ] Ions studied by argon tagging include the hydrated proton H + (H 2 O) n Ar with n=2 to 5, [ 159 ] hydrated 18-crown-6 ether alkali metal ions, [ 160 ] hydrated alkali metal ions, [ 161 ] transition metal acetylene complexes, [ 162 ] protonated ethylene, [ 163 ] and IrO 4 + . [ 164 ] Argon methyl cations (or methyliumargon), Ar n CH 3 + , are known for n = 1 to 8. CH 3 + is a Y shape, and when argon atoms are added they go above and below the plane of the Y. If more argon atoms are added they line up with the hydrogen atoms. Δ H 0 for ArCH 3 + is 11 kcal/mol, and for Ar 2 CH 3 + it is 13.5 kcal/mol (for 2Ar + CH 3 + ). [ 165 ] Boroxyl ring cationic complexes with argon, [ArB 3 O 4 ] + , [ArB 3 O 5 ] + , [ArB 4 O 6 ] + and [ArB 5 O 7 ] + , were prepared via laser vaporization at cryogenic temperatures and investigated by infrared gas phase spectroscopy. [ 3 ] They were the first large stable gas phase complexes that feature strong dative bonding between argon and boron. Dications with argon are known for the coinage metals. Known dications include CuAr n 2+ and AgAr n 2+ for n = 1–8, with a peak occurrence of CuAr 4 2+ or AgAr 4 2+ , and AuAr n 2+ for n = 3–7. In addition to the clusters with four argon atoms, the clusters with six argon atoms have enhanced concentration. The stability of the ions with two positive charges is unexpected as the ionization energy of argon is lower than the second ionization energy of the metal atom. So the second positive charge on the metal atom should move to the argon, ionizing it, and then forming a highly repulsive molecule that would undergo a Coulomb explosion. However, these molecules appear to be kinetically stable, and to transfer the charge to an argon atom, they have to pass through a higher energy state. [ 166 ] The clusters with four argon atoms are expected to be square planar, and those with six, to be octahedral distorted by the Jahn–Teller effect .
Examples of anions containing strong bonds with noble gases are extremely rare: the generally nucleophilic nature of anions results in their inability to bind to noble gases, which have a negative electron affinity . However, the 2017 discovery of " superelectrophilic anions ", [ 167 ] gas phase fragmentation products of closo - dodecaborates , led to the observation of stable anionic compounds containing a boron-noble gas bond with a significant degree of covalent interaction. The most reactive superelectrophilic anion [B 12 (CN) 11 ] − , a fragmentation product of the cyanated cluster [B 12 (CN) 12 ] 2− , was reported to bind argon spontaneously at room temperature. [ 4 ] Armand Gautier noticed that rock contained argon (and also nitrogen) that was liberated when the rock was dissolved in acid; [ 168 ] however, how the argon was combined in the rock was ignored by the scientific community. [ 169 ] Solid buckminsterfullerene has small spaces between the C 60 balls. Under 200 MPa pressure and 200 °C heat for 12 hours, argon can be intercalated into the solid to form crystalline Ar 1 C 60 . Once it has cooled down, it is stable at standard conditions for months. Argon atoms occupy octahedral interstitial sites. The crystalline lattice size is almost unchanged at room temperature, but is slightly larger than pure C 60 below 265 K. However, argon does stop the buckyballs spinning below 250 K, a lower temperature than in pure C 60 . [ 170 ] Solid C 70 fullerene will also absorb argon under a pressure of 200 MPa and at a temperature of 200 °C. C 70 ·Ar has argon in octahedral sites and has the rock salt structure, with cubic crystals in which the lattice parameter is 15.001 Å. This compares to the pure C 70 lattice parameter of 14.964 Å, so the argon forces the crystals to expand slightly. The C 70 ellipsoidal balls rotate freely in the solid; they are not locked into position by extra argon atoms filling the holes. Argon gradually escapes over a couple of days when the solid is stored at standard conditions, so that C 70 ·Ar is less stable than C 60 ·Ar. This is likely to be due to the shape and internal rotation allowing channels through which Ar atoms can move. [ 171 ] When fullerenes are dissolved and crystallized from toluene , solids may form with toluene included as part of the crystal. However, if this crystallization is performed under a high-pressure argon atmosphere, toluene is not included, being replaced by argon. The argon is then removed from the resultant crystal by heating to produce unsolvated solid fullerene. [ 172 ] Argon forms a clathrate with hydroquinone (HOC 6 H 4 OH) 3 •Ar. [ 173 ] When crystallised from benzene under a pressure of 20 atmospheres of argon, a well-defined structure containing argon results. [ 174 ] An argon- phenol clathrate 4C 6 H 5 OH•Ar is also known. It has a binding energy of 40 kJ/mol. [ 169 ] Other substituted phenols can also crystallise with argon. [ 173 ] The argon water clathrate is described in the Aqueous argon section. Argon difluoride, ArF 2 , is predicted to be stable at pressures over 57 GPa. It should be an electrical insulator. [ 175 ] At around 4 K there are two phases where neon and argon are mixed as a solid: Ne 2 Ar and Ar 2 Ne. [ 176 ] With Kr, solid argon forms a disorganized mixture. [ 177 ] Under high pressure stoichiometric solids are formed with hydrogen and oxygen: Ar(H 2 ) 2 and Ar(O 2 ) 3 . [ 178 ] Ar(H 2 ) 2 crystallises in the hexagonal C14 MgZn 2 Laves phase .
It is stable to at least 200 GPa, but is predicted to change at 250 GPa to an AlB 2 structure. At even higher pressures the hydrogen molecules should break up, followed by metallization. [ 178 ] Oxygen and argon under pressure at room temperature form several different alloys with different crystal structures. Argon atoms and oxygen molecules are similar in size, so that a greater range of miscibility occurs compared to other gas mixtures. Solid argon can dissolve up to 5% oxygen without changing structure. Below 50% oxygen, a hexagonal close-packed phase exists. This is stable from about 3 GPa to 8.5 GPa. Its typical formula is ArO. With more oxygen between 5.5 and 7 GPa, a cubic Pm 3 n structure exists, but under higher pressure it changes to an I 4 2 d space group form. Above 8.5 GPa these alloys separate into solid argon and ε-oxygen. The cubic structure has a unit cell edge of 5.7828 Å at 6.9 GPa. The representative formula is Ar(O 2 ) 3 . [ 179 ] Using density-functional theory, ArHe 2 is predicted to exist with the MgCu 2 Laves phase structure at high pressures below 13.8 GPa. Above 13.8 GPa it transforms to an AlB 2 structure. [ 180 ] Under pressure, argon inserts into zeolite . Argon has an atomic radius of 1.8 Å, so it can insert into pores if they are big enough. Each unit cell of the TON zeolite can contain up to 5 atoms of argon, compared to 12 of neon. Argon-infused TON zeolite (Ar-TON) is more compressible than Ne-TON as the unoccupied pores become elliptical under increased pressure. When Ar-TON is brought to atmospheric pressure, the argon only desorbs slowly, so that some remains in the solid without external pressure for a day. [ 181 ] At 140 GPa and 1500 K, nickel and argon form an alloy, NiAr. [ 182 ] NiAr is stable at room temperature and a pressure as low as 99 GPa. It has a face-centred cubic (fcc) structure. The compound is metallic. Each nickel atom loses 0.2 electrons to an argon atom, which is thereby an oxidant. This contrasts with Ni 3 Xe, in which nickel is the oxidant. The volume of the ArNi compound is 5% less than that of the separate elements at these pressures. If this compound exists in the core of the Earth, it could explain why only half of the argon-40 that should have been produced by the radioactive decay responsible for geothermal heating seems to exist on the Earth. [ 183 ] Organoargon chemistry describes the synthesis and properties of chemical compounds containing a carbon-to-argon chemical bond . Very few such compounds are known. The reaction of acetylene dications with argon produced HCCAr 2+ in 2008. [ 184 ] Reaction of the CF 2+ 3 dication with argon produced ArCF 2+ 2 : this reaction is unique to argon among the noble gases. [ 185 ] The compound FArCCH has been theoretically studied and is predicted to be stable. [ 186 ] FArCCF might also be stable enough to synthesise and detect, but probably not FArCCArF. [ 187 ] Calculations in 2015 suggest that FArCCH and FArCH 3 are stable, but not FArCN. [ 188 ] FArCC − should be kinetically stable, as is also expected of the krypton and xenon (but not helium) analogues. [ 189 ] HArC 4 H (for which the krypton analogue is known) and HArC 6 H have also been predicted as stable. [ 190 ] FArCO + and ClArCO + should be metastable and might be possible to characterise under cryogenic conditions.
[ 191 ] Calculations suggest that HArCCF and HCCArF should be stable, and that HNgCCF molecules should be more stable than HNgCCH (Ng = Ar, Kr, Xe); the corresponding krypton species have been experimentally produced, but not the argon species despite an experimental attempt. HCCNgCN and HCCNgNC (Ng = Ar, Kr, Xe) are likewise computed to be stable, but experimental searches for them have failed. [ 192 ]
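The isobaric interference of Ar 35 Cl + with 75 As mentioned above comes down to a very small mass difference. The following is a minimal sketch using approximate isotopic masses (values rounded from standard tables, not taken from the cited references) to show roughly what mass resolving power would be needed to separate the two peaks.

```python
# Approximate isotopic masses in daltons (rounded from standard tables).
m_Ar40 = 39.9624
m_Cl35 = 34.9689
m_As75 = 74.9216

m_ArCl = m_Ar40 + m_Cl35            # ~74.9313 Da, ignoring the electron mass
delta  = m_ArCl - m_As75            # ~0.0097 Da mass gap
resolving_power = m_As75 / delta    # m / delta_m needed to separate the peaks

print(f"ArCl+ mass   ~ {m_ArCl:.4f} Da")
print(f"mass gap     ~ {delta:.4f} Da")
print(f"required resolving power ~ {resolving_power:.0f}")
```

The required resolving power comes out near 8,000, which is typically beyond quadrupole instruments with roughly unit-mass resolution; this is consistent with the chemical and instrumental workarounds described above.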
https://en.wikipedia.org/wiki/NiAr
Nickel(II) carbonate describes one or a mixture of inorganic compounds containing nickel and carbonate . From the industrial perspective, an important nickel carbonate is basic nickel carbonate with the formula Ni 4 CO 3 (OH) 6 (H 2 O) 4 . Simpler carbonates, ones more likely encountered in the laboratory, are NiCO 3 and its hexahydrate. All are paramagnetic green solids containing Ni 2+ cations. The basic carbonate is an intermediate in the hydrometallurgical purification of nickel from its ores and is used in electroplating of nickel. [ 3 ] The hexahydrate NiCO 3 •6H 2 O is claimed to form upon electrolysis of nickel metal under an atmosphere of carbon dioxide. Green and yellow forms of anhydrous NiCO 3 form when aqueous nickel chloride solutions are heated under high pressures of carbon dioxide. [ 4 ] NiCO 3 adopts a structure like calcite , consisting of nickel in an octahedral coordination geometry . [ 5 ] A pentahydrate has also been characterized by X-ray crystallography . Also known as the mineral hellyerite , the solid consists of [Ni 2 (CO 3 ) 2 (H 2 O) 8 ] subunits with an extra water of hydration . [ 6 ] Nickel carbonates are hydrolyzed upon contact with aqueous acids to give solutions containing the ion [Ni(H 2 O) 6 ] 2+ , liberating water and carbon dioxide in the process. Calcining (heating to drive off CO 2 and water) these carbonates gives nickel(II) oxide ; for the simple carbonate, NiCO 3 → NiO + CO 2 . The nature of the resulting oxide depends on the nature of the precursor. The oxide obtained from the basic carbonate is often most useful for catalysis . Basic nickel carbonate can be made by treating solutions of nickel sulfate with sodium carbonate . The hydrated carbonate has been prepared by electrolytic oxidation of nickel in the presence of carbon dioxide. [ 7 ] Nickel carbonates are used in some ceramic applications and as precursors to catalysts . The natural nickel carbonate is hellyerite , mentioned above. Basic Ni carbonates also have some natural representatives. [ 8 ]
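The calcination of the simple carbonate lends itself to a quick mass-balance check. The following is a minimal sketch, assuming the idealized anhydrous stoichiometry NiCO 3 → NiO + CO 2 and standard atomic masses; the numbers are illustrative, not measurements from the cited sources.

```python
# Expected mass loss on calcining anhydrous NiCO3 to NiO (idealized stoichiometry).
Ni, C, O = 58.693, 12.011, 15.999   # standard atomic masses, g/mol

m_NiCO3 = Ni + C + 3 * O            # ~118.70 g/mol
m_NiO   = Ni + O                    # ~74.69 g/mol
m_CO2   = C + 2 * O                 # ~44.01 g/mol

loss = m_CO2 / m_NiCO3              # fraction of mass lost as CO2
print(f"NiCO3 -> NiO + CO2: expected mass loss ~ {100 * loss:.1f} %")
```

The idealized loss is about 37% of the starting mass; hydrated and basic carbonates would lose correspondingly more because water is driven off as well.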
https://en.wikipedia.org/wiki/NiCO3
Nickel(II) chloride (or just nickel chloride ) is the chemical compound NiCl 2 . The anhydrous salt is yellow, but the more familiar hydrate NiCl 2 ·6H 2 O is green. Nickel(II) chloride, in various forms, is the most important source of nickel for chemical synthesis. The nickel chlorides are deliquescent , absorbing moisture from the air to form a solution. Nickel salts have been shown to be carcinogenic to the lungs and nasal passages in cases of long-term inhalation exposure . [ 4 ] Large scale production and uses of nickel chloride are associated with the purification of nickel from its ores. It is generated by the extraction, using hydrochloric acid, of nickel matte and of residues obtained from roasting and refining nickel-containing ores. Electrolysis of nickel chloride solutions is used in the production of nickel metal. Other significant routes to nickel chloride arise from processing of ore concentrates, such as various reactions involving copper chlorides. [ 5 ] Nickel chloride is not usually prepared in the laboratory because it is inexpensive and has a long shelf-life. The yellowish dihydrate, NiCl 2 ·2H 2 O, is produced by heating the hexahydrate between 66 and 133 °C. [ 6 ] The hydrates convert to the anhydrous form upon heating in thionyl chloride or by heating under a stream of HCl gas. Simply heating the hydrates does not afford the anhydrous dichloride. The dehydration is accompanied by a color change from green to yellow. [ 7 ] If a pure compound free of cobalt is needed, nickel chloride can be obtained by cautiously heating hexaamminenickel chloride : [Ni(NH 3 ) 6 ]Cl 2 → NiCl 2 + 6 NH 3 . [ 8 ] NiCl 2 adopts the CdCl 2 structure . [ 9 ] In this motif, each Ni 2+ center is coordinated to six Cl − centers, and each chloride is bonded to three Ni(II) centers. In NiCl 2 the Ni-Cl bonds have "ionic character". Yellow NiBr 2 and black NiI 2 adopt similar structures, but with a different packing of the halides, adopting the CdI 2 motif. In contrast, NiCl 2 ·6H 2 O consists of separated trans -[NiCl 2 (H 2 O) 4 ] molecules linked more weakly to adjacent water molecules. Only four of the six water molecules in the formula are bound to the nickel, and the remaining two are water of crystallization , so the formula of nickel(II) chloride hexahydrate is [NiCl 2 (H 2 O) 4 ]·2H 2 O. [ 9 ] Cobalt(II) chloride hexahydrate has a similar structure. The hexahydrate occurs in nature as the very rare mineral nickelbischofite. The dihydrate NiCl 2 ·2H 2 O adopts a structure intermediate between the hexahydrate and the anhydrous forms. It consists of infinite chains of NiCl 2 , wherein both chloride centers are bridging ligands . The trans sites on the octahedral centers are occupied by aquo ligands . [ 10 ] A tetrahydrate NiCl 2 ·4H 2 O is also known. Nickel(II) chloride solutions are acidic, with a pH of around 4 due to the hydrolysis of the Ni 2+ ion. Most of the reactions ascribed to "nickel chloride" involve the hexahydrate, although specialized reactions require the anhydrous form. Reactions starting from NiCl 2 ·6H 2 O can be used to form a variety of nickel coordination complexes because the H 2 O ligands are rapidly displaced by ammonia , amines , thioethers , thiolates , and organo phosphines . In some derivatives, the chloride remains within the coordination sphere , whereas chloride is displaced by highly basic ligands.
NiCl 2 is the precursor to acetylacetonate complexes Ni(acac) 2 (H 2 O) 2 and the benzene-soluble (Ni(acac) 2 ) 3 , which is a precursor to Ni(1,5-cyclooctadiene) 2 , an important reagent in organonickel chemistry. In the presence of water scavengers, hydrated nickel(II) chloride reacts with dimethoxyethane (dme) to form the molecular complex NiCl 2 (dme) 2 . [ 6 ] The dme ligands in this complex are labile. NiCl 2 and its hydrate are occasionally useful in organic synthesis . [ 14 ] NiCl 2 -dme (or NiCl 2 -glyme) is used due to its increased solubility in comparison to the hexahydrate. [ 15 ] Nickel(II) chloride is irritating upon ingestion, inhalation, skin contact, and eye contact. Prolonged inhalation exposure to nickel and its compounds has been linked to increased cancer risk to the lungs and nasal passages. [ 4 ]
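The dehydration steps described earlier in this article can be followed gravimetrically. The following is a minimal sketch computing the water content of the hexahydrate and dihydrate from standard atomic masses; the values are routine arithmetic, not data from the cited sources.

```python
# Water mass fraction of NiCl2 hydrates (standard atomic masses, g/mol).
Ni, Cl, H, O = 58.693, 35.453, 1.008, 15.999

m_NiCl2 = Ni + 2 * Cl               # ~129.60 g/mol
m_H2O   = 2 * H + O                 # ~18.02 g/mol

for n in (6, 2):                    # hexahydrate and dihydrate
    m_hydrate = m_NiCl2 + n * m_H2O
    frac = n * m_H2O / m_hydrate
    print(f"NiCl2*{n}H2O: water fraction ~ {100 * frac:.1f} %")
```

These fractions (roughly 45% for the hexahydrate and 22% for the dihydrate) are the mass losses one would track when heating the hexahydrate toward the dihydrate in the 66–133 °C range described above.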
https://en.wikipedia.org/wiki/NiCl2
Dichlorobis(triphenylphosphine)nickel(II) refers to a pair of metal phosphine complexes with the formula NiCl 2 [P(C 6 H 5 ) 3 ] 2 . The compound exists as two isomers, a paramagnetic dark blue solid and a diamagnetic red solid. These complexes function as catalysts for organic synthesis . [ 1 ] The blue isomer is prepared by treating hydrated nickel chloride with triphenylphosphine in alcohols or glacial acetic acid : NiCl 2 ·6H 2 O + 2 PPh 3 → NiCl 2 (PPh 3 ) 2 + 6 H 2 O. [ 1 ] When allowed to crystallise from chlorinated solvents, the tetrahedral isomer converts to the square planar isomer. The square planar form is red and diamagnetic. The phosphine ligands are trans with respective Ni-P and Ni-Cl distances of 2.24 and 2.17 Å. [ 2 ] [ 3 ] The blue form is paramagnetic and features tetrahedral Ni(II) centers. In this isomer, the Ni-P and Ni-Cl distances are elongated at 2.32 and 2.21 Å. [ 4 ] [ 5 ] As illustrated by the title complexes, tetrahedral and square planar isomers coexist in solutions of various four-coordinated nickel(II) complexes. Weak field ligands, as judged by the spectrochemical series , favor tetrahedral geometry, and strong field ligands favor the square planar isomer. NiCl 2 (PPh 3 ) 2 contains both weak field (Cl − ) and strong field (PPh 3 ) ligands; hence this compound is borderline between the two geometries. Steric effects also affect the equilibrium, with larger ligands favoring the less crowded tetrahedral geometry. [ 6 ] The complex was first described by Walter Reppe , who popularized its use in alkyne trimerisations and carbonylations . [ 7 ] Dichlorobis(triphenylphosphine)nickel(II) is a catalyst in Suzuki reactions , although it is usually inferior in terms of activity. [ 8 ]
https://en.wikipedia.org/wiki/NiCl2P2
Nickel(II) fluoride is the chemical compound with the formula NiF 2 . It is an ionic compound of nickel and fluorine and forms yellowish to green tetragonal crystals. Unlike many fluorides, NiF 2 is stable in air. Nickel(II) fluoride is also produced when nickel metal is exposed to fluorine. In fact, NiF 2 comprises the passivating surface that forms on nickel alloys (e.g. monel ) in the presence of hydrogen fluoride or elemental fluorine . For this reason, nickel and its alloys are suitable materials for the storage and transport of fluorine and related fluorinating agents. NiF 2 is also used as a catalyst for the synthesis of chlorine pentafluoride . NiF 2 is prepared by treatment of anhydrous nickel(II) chloride with fluorine at 350 °C: NiCl 2 + F 2 → NiF 2 + Cl 2 . [ 2 ] The corresponding reaction of cobalt(II) chloride results in oxidation of the cobalt , whereas nickel remains in the +2 oxidation state after fluorination because its +3 oxidation state is less stable. Chloride is more easily oxidized than nickel(II). This is a typical halogen displacement reaction, where a halogen plus a less active halide makes the less active halogen and the more active halide. Like some other metal difluorides, NiF 2 crystallizes in the rutile structure, which features octahedral Ni centers and planar fluorides. [ 3 ] At low temperatures, its magnetic structure is antiferromagnetic. [ 4 ] A melt of NiF 2 and KF reacts to give successively potassium trifluoronickelate (KNiF 3 ) and potassium tetrafluoronickelate (K 2 NiF 4 ). [ 5 ] The structure of the latter material is closely related to that of some superconducting oxide materials. [ 6 ] Nickel(II) fluoride reacts with strong bases to give nickel(II) hydroxide : NiF 2 + 2 OH − → Ni(OH) 2 + 2 F − .
https://en.wikipedia.org/wiki/NiF2
Nickel selenide is the inorganic compound with the formula NiSe. As with many metal chalcogenides , the phase diagram for nickel(II) selenide is complicated. Two other selenides of nickel are known, NiSe 2 with a pyrite structure , and Ni 2 Se 3 . Additionally, NiSe is usually nonstoichiometric and is often described with the formula Ni 1−x Se, with 0 < x < 0.15. [ 1 ] This material is a semiconducting solid that can be obtained as a fine black powder or, in the form of larger crystals, as a silvery solid. Nickel(II) selenide is insoluble in all solvents, but can be degraded by strongly oxidizing acids. Typically, NiSe is prepared by high-temperature reaction of the elements. Such reactions typically afford mixed-phase products. Milder methods have also been described using more specialised techniques, such as reaction of the elements in liquid ammonia in a pressure vessel. [ 2 ] Like many related materials, nickel(II) selenide adopts the nickel arsenide motif. In this structure, nickel is octahedral and the selenides occupy trigonal prismatic sites. [ 3 ]
https://en.wikipedia.org/wiki/NiSe
Nickel monosilicide is an intermetallic compound of nickel and silicon . Like other nickel silicides , NiSi is of importance in the area of microelectronics . Nickel monosilicide can be prepared by depositing a nickel layer on silicon and subsequent annealing . For Ni films with thicknesses above 4 nm , the usual phase sequence is Ni 2 Si at 250 °C, followed by NiSi at 350 °C and NiSi 2 at approximately 800 °C. [ 4 ] For films with an initial Ni thickness below 4 nm, a direct transition from orthorhombic Ni 2 Si to epitaxial NiSi 2−x , skipping the nickel monosilicide phase, is observed. [ 5 ] Several properties make NiSi an important local contact material in microelectronics, among them a reduced thermal budget , a low resistivity of 13–14 μΩ·cm, and reduced Si consumption compared to alternative compounds. [ 6 ]
https://en.wikipedia.org/wiki/NiSi
Nickel(II) titanate , also known as nickel titanium oxide, is an inorganic compound with the chemical formula NiTiO 3 . [ 1 ] It is composed of nickel (II), titanium (IV) and oxide ions and has the appearance of a yellow powder. Nickel(II) titanate has been used as a catalyst for toluene oxidation. [ 2 ] The compound is also known by several other names, such as nickel titanium oxide , titanium nickel oxide and nickel titanium trioxide . [ 3 ] Nickel(II) titanate crystallizes at 600 °C [ 2 ] and is stable at room temperature and normal pressure in an ilmenite structure with rhombohedral R3 symmetry. [ 4 ] In this rhombohedral structure, layers of Ni and Ti alternate along the rhombohedral axis, with O layers between them. XRD data support nickel(II) titanate's ilmenite structure and its rhombohedral symmetry. [ 2 ] Other descriptions of nickel(II) titanate's ilmenite structure treat it as a pseudo-close-packed hexagonal array of O 2− ions, two thirds of whose interstices are occupied by cations in an ordered, hexagonal-like arrangement. [ 5 ] The average crystallite size for nickel(II) titanate was estimated at 42 nm, with lattice constants of a = 5.032 Å, b = 5.032 Å, c = 4.753 Å. [ 5 ] The structure was established using X-ray powder intensities. [ 4 ] There are several methods of synthesis for nickel(II) titanate. The first method involves heating to over 500 °C, at which temperature a precursor decomposes to give nickel(II) titanate as a residue. [ 2 ] The second method uses the enthalpy and entropy of the reaction to synthesize nickel(II) titanate through its phase transition. [ 6 ] Nickel(II) titanate has been synthesized by the polymeric precursor method, which involved spontaneous combustion of Ti(OCH(CH 3 ) 2 ) 4 with Ni(NO 3 ) 2 ·6H 2 O and C 3 H 7 NO 2 in a molar ratio of 1:1:20 in isopropyl alcohol solution. [ 2 ] The nickel(II) titanate product was calcined from the precursor at 600 °C for 3 hours. [ 2 ] Nickel(II) titanate has also been formed by heating NiO and TiO 2 at 1350 °C for three hours and then cooling to room temperature: [ 6 ] NiO + TiO 2 → NiTiO 3 (on heating). A single-source heterobimetallic complex, Ni 2 Ti 2 (OEt) 2 (μ-OEt) 6 (2,4-pentanedionate) 4 , was synthesized and underwent thermal decomposition at 500 °C to give NiTiO 3 as a residue. [ 5 ] By doping NiTiO 3 with Ga 2 O 3 , the anomalous increase of the electrical conductivity is shifted to lower temperatures. [ 6 ] Because of nickel(II) titanate's brilliant yellow color and high UV-vis-NIR reflectance, it has the potential to serve as a pigment for building coatings. Ilmenite-type NiTiO 3 is well known as a functional inorganic material with wide application in electronic materials, including electrodes of solid fuel cells, gas sensors and chemical catalysts, owing to its high static dielectric constant, weak magnetism and semiconductivity. [ 7 ] As a semiconductor, NiTiO 3 has excellent catalytic activity due to its absorption bands. [ 7 ] Analysis of the band structures and density of states suggests that nickel(II) titanate has considerable potential in high-density data storage, gas sensing and integrated circuit devices. [ 7 ] NiTiO 3 has been utilized as a catalyst in toluene oxidation [ 2 ] and is used as a yellow pigment. MTiO 3 (M = Ni, Fe, Mn) compounds have received attention as possible candidates for multiferroic materials capable of magnetization through application of an electric field.
[ 8 ] In an experiment comparing NiTiO 3 with NiFe 2 O 4 as catalysts for toluene oxidation, NiTiO 3 performed better than its counterpart in oxidizing toluene. [ 2 ]
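The crystallite size quoted above (about 42 nm) is the kind of figure typically obtained from X-ray diffraction peak broadening. As a rough illustration only, the following Python sketch applies the Scherrer relation, a standard way of making such estimates; the peak position, peak width and shape factor used here are hypothetical values chosen for the example, not data from the cited study, whose exact method is not stated.

```python
import math

def scherrer_size(wavelength_nm: float, fwhm_deg: float, two_theta_deg: float, k: float = 0.9) -> float:
    """Estimate mean crystallite size (nm) from XRD peak broadening.

    Scherrer relation: D = K * lambda / (beta * cos(theta)),
    where beta is the peak full width at half maximum in radians.
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle is half of 2-theta
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values only (Cu K-alpha radiation; peak position and width are hypothetical).
print(round(scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.20, two_theta_deg=33.0), 1), "nm")
```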
https://en.wikipedia.org/wiki/NiTiO3
Nickel tungstate is an inorganic compound of nickel , tungsten and oxygen , with the chemical formula NiWO 4 . Nickel tungstate can be prepared by the reaction of nickel(II) nitrate and sodium tungstate : [ 5 ] Ni(NO 3 ) 2 + Na 2 WO 4 → NiWO 4 + 2 NaNO 3 . Nickel tungstate can also be prepared by the reaction of nickel(II) oxide and tungsten(VI) oxide . [ 6 ] It can also be obtained by the reaction of ammonium metatungstate and nickel(II) nitrate [ 7 ] or from the reaction of sodium tungstate , nickel(II) chloride and sodium chloride . [ 8 ] Nickel tungstate undergoes a phase transition at 700 °C. [ 5 ] Nickel tungstate is a light brown, odourless solid that is insoluble in water . [ 2 ] The amorphous form is green and the polycrystalline form is brown. [ 5 ] It crystallizes in the wolframite crystal structure of the monoclinic crystal system with space group P2/c (No. 13). [ 9 ] [ 8 ] The compound is antiferromagnetic . [ 10 ] [ 11 ] Nickel tungstate has no commercial uses. It has been examined as a photocatalyst , in humidity sensors, and in dielectric resonators. It is also considered a "promising" cathode material for asymmetric supercapacitors . [ 1 ] [ 12 ] Nickel tungstate forms compounds with ammonia , such as NiWO 4 ·2NH 3 ·H 2 O, which forms cyan crystals, [ 13 ] NiWO 4 ·4NH 3 , which forms green crystals, [ 14 ] NiWO 4 ·5NH 3 ·H 2 O, which forms dark blue crystals, [ 13 ] and the anhydrous hexaammine NiWO 4 ·6NH 3 , which is crystalline purple, while the octahydrate of the hexaammine is dark blue. [ 14 ]
https://en.wikipedia.org/wiki/NiWO4
Niall J. English is an Irish inventor, researcher, [ 1 ] and chartered chemical engineer. He is a co-founder and director of BioSimulytics and AquaB. [ 2 ] [ 3 ] Niall J. English was born in Dublin, Ireland, to Michael and Catherine English. [ citation needed ] He grew up in Dublin and Brussels. [ citation needed ] He speaks Irish and French. [ 3 ] He obtained a first class honours degree in Chemical engineering from University College Dublin in 2000, [ 3 ] and won the Ferdinand de Lesseps medal in French as well as the Engineering Graduates' Association gold medal in his final year in 1999-2000. [ 4 ] [ failed verification ] English completed his Ph.D. in 2003. [ 3 ] During 2004-2005, English investigated electric-field effects at the US DOE National Energy Technology Laboratory in Pittsburgh . [ 2 ] Between 2005 and 2007, English worked for Chemical Computing Group in Cambridge , Great Britain . [ 3 ] During this time, English developed molecular simulation codes, protocols, and methods for biomolecular simulation. [ 3 ] In January 2007, he was hired as a lecturer at the School of Chemical and Bioprocess Engineering. [ 3 ] [ 4 ] He was promoted to senior lecturer in 2014 and to professor in 2017. [ 3 ] English's research specializes in nanoscience, energy , gas hydrates, solar and renewable energies, and the simulation of electromagnetic-field effects on (nano)materials and biological systems. [ 3 ] In 2019, he co-founded BioSimulytics [ 4 ] [ 5 ] and AquaB, [ 3 ] [ 4 ] of which he is a director. Both companies are backed by the EIC-Accelerator program. [ 3 ] In 2023, he took legal action against University College Dublin to block it from granting a commercialization licence to companies rivalling his own. [ 6 ] [ 7 ] The case was settled out of court. [ 8 ]
https://en.wikipedia.org/wiki/Niall_J._English
In computing , a nibble , [ 1 ] also spelled nybble to match the spelling of byte , is a unit of information consisting of four bits : half of a byte ( octet ). [ 1 ] [ 2 ] [ 3 ] The unit is alternatively called a nyble , nybl , half-byte [ 4 ] or tetrade . [ 5 ] [ 6 ] In networking or telecommunications , the unit is often called a semi-octet , [ 7 ] quadbit , [ 8 ] or quartet . [ 9 ] [ 10 ] As a nibble can represent sixteen ( 2 4 ) possible values, a nibble value is often shown as a hexadecimal digit (hex digit). [ 11 ] A byte is two nibbles, and therefore a byte value can be shown as two hex digits. Four-bit computers use nibble-sized data for storage and operations, with the nibble as the word unit. Such computers were used in early microprocessors , pocket calculators and pocket computers . They continue to be used in some microcontrollers . In this context, 4-bit groups were sometimes also called characters [ 12 ] rather than nibbles. [ 1 ] The term nibble originates from its representing "half a byte", with byte a homophone of the English word bite . [ 4 ] In 2014, David B. Benson, a professor emeritus at Washington State University , recalled that he playfully used (and may have coined) the term nibble as "half a byte" and as the unit of storage required to hold a binary-coded decimal (BCD) digit around 1958, when talking to a programmer from Los Alamos Scientific Laboratory . The alternative spelling nybble reflects the spelling of byte , as noted in editorials of Kilobaud and Byte in the early 1980s. Another early recorded use of the term nybble was in 1977 within the consumer-banking technology group at Citibank, which created a pre- ISO 8583 standard for transactional messages between cash machines and Citibank's data centers that used the basic data unit 'nabble'. Nibble is used to describe the amount of memory used to store a digit of a number stored in packed decimal format (BCD) within an IBM mainframe. This technique is used to make computations faster and debugging easier. An 8-bit byte is split in half and each nibble is used to store one decimal digit. The last (rightmost) nibble of the variable is reserved for the sign. Thus a variable which can store up to nine digits would be "packed" into 5 bytes. Ease of debugging resulted from the numbers' being readable in a hex dump, where two hex digits are used to represent the value of a byte, as 16×16 = 2 8 . For example, the five-byte BCD value 31 41 59 26 5C represents a decimal value of +314159265. Historically, there are cases where nybble was used for a group of bits greater than 4. On the Apple II , much of the disk drive control and group-coded recording was implemented in software. Writing data to a disk was done by converting 256-byte pages into sets of 5-bit (later, 6-bit ) nibbles, and loading disk data required the reverse. [ 13 ] [ 14 ] [ 15 ] Moreover, 1982 documentation for the Integrated Woz Machine refers consistently to an "8 bit nibble". [ 16 ] The term byte once had the same ambiguity and meant a set of bits, but not necessarily 8, hence the distinction of bytes and octets or of nibbles and quartets (or quadbits ). Today, the terms byte and nibble almost always refer to 8-bit and 4-bit collections respectively, and are very rarely used to express any other sizes. A nibble-sized value can be represented in different numeric bases. The low and high nibbles of a byte are its two halves: the low nibble comprises the four less significant bits and the high nibble the four more significant bits.
In a graphical representation of bits within a byte, the leftmost bit could represent the most significant bit ( MSB ), corresponding to ordinary decimal notation in which the digit at the left of a number is the most significant. In such an illustration, the four bits on the left end of the byte form the high nibble, and the remaining four bits form the low nibble. [ 17 ] For example, in the byte 0110 0001 2 ( 61 16 ), the high nibble is 0110 2 ( 6 16 ) and the low nibble is 0001 2 ( 1 16 ). The total value is high-nibble × 16 10 + low-nibble ( 6 × 16 + 1 = 97 10 ).
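As a rough illustration of the arithmetic described above, the following Python sketch splits a byte into its high and low nibbles and decodes an IBM-style packed-decimal value. The function names are invented for this example; the 0xC/0xD sign-nibble convention shown is common packed-decimal practice and is assumed here rather than taken from the text.

```python
def split_nibbles(byte: int) -> tuple[int, int]:
    """Return the (high, low) nibbles of an 8-bit value."""
    return (byte >> 4) & 0xF, byte & 0xF

# The worked example from the text: high nibble 0110 (6), low nibble 0001 (1).
high, low = split_nibbles(0b0110_0001)
assert high * 16 + low == 97              # 6 * 16 + 1 = 97

def unpack_bcd(data: bytes) -> int:
    """Decode a packed-decimal value: two digits per byte, with the last
    (rightmost) nibble holding the sign (0xC assumed positive, 0xD negative)."""
    nibbles = [n for b in data for n in split_nibbles(b)]
    sign = -1 if nibbles[-1] == 0xD else 1
    value = 0
    for digit in nibbles[:-1]:
        value = value * 10 + digit
    return sign * value

# The five-byte value 31 41 59 26 5C from the text decodes to +314159265.
assert unpack_bcd(bytes.fromhex("314159265C")) == 314159265
```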
https://en.wikipedia.org/wiki/Nibble
Nibrozetone is an investigational new drug that is being evaluated by EpicentRx for the treatment of oral mucositis in head and neck cancer patients. It is a small molecule that combines direct inhibition of the NLRP3 inflammasome , induction of NRF2 , and release of nitric oxide under hypoxic conditions. [ 1 ] [ 2 ] It has received Fast Track designation from the FDA for severe oral mucositis in head and neck cancer patients. [ 3 ]
https://en.wikipedia.org/wiki/Nibrozetone
A nick is a discontinuity in a double stranded DNA molecule where there is no phosphodiester bond between adjacent nucleotides of one strand . They typically occur through damage or enzyme action. Nicks allow DNA strands to untwist during replication, and are also thought to play a role in the DNA mismatch repair mechanisms that fix errors on both the leading and lagging daughter strands. [ 1 ] Nicking can be used to dissipate the torsional energy stored in the intersecting (supercoiled) states of a twisted plasmid ; the nicks allow the DNA to relax into a circular shape. [ 2 ] Nicked DNA can be the result of DNA damage or of purposeful, regulated biomolecular reactions carried out in the cell. During processing, DNA can be nicked by physical shearing, over-drying, or enzymes. Excessive rough handling in pipetting or vortexing creates physical stress that can lead to breaks and nicks in DNA. Overdrying of DNA can also break the phosphodiester bond in DNA and result in nicks. Nicking enzymes can also introduce nicks deliberately. A nick in DNA can be formed by the hydrolysis and subsequent removal of a phosphate group within the helical backbone. This leads to a different DNA conformation, in which a hydrogen bond forms in place of the missing piece of the DNA backbone in order to preserve the structure. [ 3 ] Ligases are versatile and ubiquitous enzymes that join the 3' hydroxyl and 5' phosphate ends to form a phosphodiester bond, making them essential in nicked DNA repair, and ultimately in genome fidelity. This biological role has also been extremely valuable in sealing the sticky ends of plasmids in molecular cloning. Their importance is attested by the fact that most organisms have multiple ligases dedicated to specific pathways of repairing DNA. In eubacteria, these ligases are powered by NAD+ rather than ATP. [ 4 ] Each nick site requires 1 ATP or 1 NAD+ to power the ligase repair. [ 4 ] To join these fragments, the ligase progresses through three steps: adenylylation of the enzyme, transfer of the AMP group to the 5' phosphate at the nick, and attack by the 3' hydroxyl to form the phosphodiester bond and release AMP. One particular example of a ligase catalyzing nick closure is the E. coli NAD+ dependent DNA ligase, LigA. LigA is a relevant example as it is structurally similar to a clade of enzymes found across all types of bacteria. [ 7 ] Ligases have a metal binding site which is capable of recognizing nicks in DNA. The ligase forms a DNA-adenylate complex, assisting recognition. [ 8 ] With human DNA ligase, this forms a crystallized complex. The complex, which has a DNA–adenylate intermediate, allows DNA ligase I to institute a conformational change in the DNA for the isolation and subsequent repair of the DNA nick. [ 9 ] Single-stranded nicks act as recognizable markers to help the repair machinery distinguish the newly synthesized strand (daughter strand) from the template strand (parental strand). [ 1 ] DNA mismatch repair (MMR) is an important DNA repair system that helps maintain genomic stability by correcting mismatches, or non-Watson-Crick base pairs, in a DNA duplex. [ 10 ] Some sources of mismatched base pairs include replication errors and deamination of 5-methylcytosine in DNA to form thymine. MMR in most bacteria and eukaryotes is directed to the erroneous strand of the mismatched duplex through recognition of strand discontinuities, while MMR in E. coli and closely related bacteria is directed to the strand on the basis of the absence of methylation . Nicking endonucleases introduce strand discontinuities, or DNA nicks, for both respective systems.
Mut L homologues from eukaryotes and most bacteria incise the discontinuous strand to introduce the entry or termination point for the excision reaction. Similarly, in E. coli , Mut H nicks the unmethylated strand of the duplex to introduce the entry point of excision. [ 11 ] For eukaryotes specifically, the mechanism of DNA replication elongation differs between the leading and lagging strands. On the lagging strand, nicks exist between Okazaki fragments and are easily recognizable by the DNA mismatch repair machinery prior to ligation . Due to the continuous replication that occurs on the leading strand, the mechanism there is slightly more complex. During replication, ribonucleotides are added by replication enzymes, and these ribonucleotides are nicked by an enzyme called RNase H2 . [ 1 ] Together, the presence of a nick and a ribonucleotide make the leading strand easily recognizable to the DNA mismatch repair machinery. Nick translation is a biological process in which a single-stranded DNA nick serves as the marker for DNA polymerase to excise and replace possibly damaged nucleotides. [ 3 ] At the end of the segment that DNA polymerase acts on, DNA ligase must repair the final segment of the DNA backbone in order to complete the repair process. [ 4 ] In a lab setting, this can be used to introduce fluorescent or other tagged nucleotides by purposefully inducing site-specific, single-stranded nicks in DNA in vitro and then adding the nicked DNA to an environment rich in DNA polymerase and tagged nucleotides. The DNA polymerase then replaces the DNA nucleotides with the tagged ones, starting at the site of the single-stranded nick. Nicked DNA plays an important role in many biological functions. For instance, single-stranded nicks in DNA may serve as purposeful biological markers for the enzyme topoisomerase , which unwinds packed DNA and is critical to DNA replication and transcription. In these instances, nicked DNA is not the result of unwanted cell damage. [ 2 ] Topoisomerase-1 preferentially acts at nicks in DNA to cleave adjacent to the nick and then winds or unwinds the complex topologies associated with packed DNA. Here, the nick in the DNA serves as a marker for single-strand breakage and subsequent unwinding. [ 12 ] It is possible that this is not a highly conserved process. Topoisomerase may cause short deletions when it cleaves bonds, because both full-length DNA products and short deletion strands are seen as products of topoisomerase cleavage, while inactive mutants produce only full-length DNA strands. [ 13 ] Nicks in DNA also give rise to different structural properties, can be involved in repairing damage caused by ultraviolet radiation, and are used in the primary steps that allow for genetic recombination . [ 14 ] Nick idling is a biological process in which DNA polymerase may slow or stop its activity of adding bases to a new daughter strand during DNA replication at a nick site. [ 4 ] This is particularly relevant to Okazaki fragments on the lagging strand during double-stranded DNA replication, because the direction of replication is opposite to the direction of DNA polymerase; nick idling therefore plays a role in stalling the complex as it replicates in small fragments (Okazaki fragments), since the polymerase has to stop and reposition itself between each fragment of DNA. DNA structure changes when a single-stranded nick is introduced.
[ 14 ] Stability is decreased, as a break in the phosphodiester backbone allows the DNA to unwind ; the built-up stress from twisting and packing is no longer resisted as strongly. [ 12 ] Nicked DNA is more susceptible to degradation due to this reduced stability. The nic site or nick region is found within the origin of transfer ( oriT ) site and is key to initiating bacterial conjugation . A single strand of DNA, called the T-strand , is cut at nic by an enzyme called relaxase . [ 15 ] This single strand is eventually transferred to the recipient cell during the process of bacterial conjugation. Before this cleavage can occur, however, a group of proteins must attach to the oriT site. This group of proteins is called the relaxosome. [ 15 ] It is thought that portions of the oriT site are bent in a way that creates interaction between the relaxosome proteins and the nic site. [ 15 ] Cleaving the T-strand involves relaxase cutting a phosphodiester bond at the nic site. [ 15 ] The cleaved strand is left with a hydroxyl group at the 3' end, which may allow the strand to form a circular plasmid after moving into the recipient cell. [ 16 ] [ 17 ] DNA nicks promote crossover formation during meiosis , and such nicks are protected from ligation by Exonuclease 1 (Exo1). [ 18 ]
https://en.wikipedia.org/wiki/Nic_site
Mechanistic models for niche apportionment are biological models used to explain relative species abundance distributions. These niche apportionment models describe how species break up the resource pool in multi-dimensional space, determining the distribution of abundances of individuals among species. The relative abundances of species are usually expressed as a Whittaker plot, or rank abundance plot, in which species are ranked along the x-axis by number of individuals and the log relative abundance of each species is plotted on the y-axis. The relative abundance can be measured as the relative number of individuals within species or the relative biomass of individuals within species. Niche apportionment models were developed because ecologists sought biological explanations for relative species abundance distributions. MacArthur (1957, 1961) [ 1 ] [ 2 ] was one of the earliest to express dissatisfaction with purely statistical models , presenting instead three mechanistic niche apportionment models. MacArthur believed that ecological niches within a resource pool could be broken up like a stick, with each piece of the stick representing niches occupied in the community. With contributions from Sugihara (1980), [ 3 ] Tokeshi (1990, 1993, 1996) [ 4 ] [ 5 ] [ 6 ] expanded upon the broken stick model, generating roughly seven mechanistic niche apportionment models. These mechanistic models provide a useful starting point for describing the species composition of communities. A niche apportionment model can be used in situations where one resource pool is either sequentially or simultaneously broken up into smaller niches by colonizing species or by speciation (a clarification on resource use: species within a guild use the same resources, while species within a community may not). These models describe how species that draw from the same resource pool (e.g. a guild (ecology) ) partition their niche. The resource pool is broken either sequentially or simultaneously, and the two components of the process of fragmentation of the niche are which fragment is chosen and the size of the resulting fragment. Niche apportionment models have been used in the primary literature to explain and describe changes in the relative abundance distributions of a diverse array of taxa, including freshwater insects, fish, bryophytes, beetles, hymenopteran parasites, plankton assemblages and salt marsh grass. The mechanistic models that describe these plots work under the assumption that rank abundance plots are based on a rigorous estimate of the abundances of individuals within species and that these measures represent the actual species abundance distribution. Furthermore, whether the abundance measure is the number of individuals or the biomass of individuals, these models assume that this quantity is directly proportional to the size of the niche occupied by an organism. One suggestion is that abundance measured as the number of individuals may exhibit lower variance than abundance measured as biomass . Thus, some studies using abundance as a proxy for niche allocation may overestimate the evenness of a community . This happens because there is not a clear distinction of the relationship between body size, abundance (ecology) , and resource use. Often studies fail to incorporate size structure or biomass estimates into measures of actual abundance, and these measures can create a higher variance around the niche apportionment models than abundance measured strictly as the number of individuals.
[ 7 ] [ 8 ] Seven mechanistic models that describe niche apportionment are described below. The models are presented in order of increasing evenness, from the least even, the Dominance Pre-emption model, to the most even, the Dominance Decay and MacArthur Fraction models. The Dominance Pre-emption model describes a situation where, after initial colonization (or speciation), each new species pre-empts more than 50% of the smallest remaining niche. In this model the species colonize a random portion, between 50 and 100%, of the smallest remaining niche, making the model stochastic in nature. A closely related model, the Geometric Series, [ 9 ] is a deterministic version of the Dominance Pre-emption model, wherein the fraction of remaining niche space that each new species occupies ( k ) is always the same, so that the expected relative abundance of the i -th ranked species is P i = k (1 − k ) i −1 . In fact, the Dominance Pre-emption and Geometric Series models are conceptually similar and will produce the same relative abundance distribution when the proportion of the smaller niche filled is always 0.75 (i.e. k = 0.75). The Dominance Pre-emption model is the best fit to the relative abundance distributions of some stream fish communities in Texas , including some taxonomic groupings and specific functional groupings. [ 10 ] In the Random Assortment model the resource pool is divided at random among simultaneously or sequentially colonizing species. This pattern could arise because the abundance measure does not scale with the amount of niche occupied by a species, or because temporal variation in species abundance or niche breadth causes discontinuity in niche apportionment over time, so that species appear to have no relationship between extent of occupancy and their niche. Tokeshi (1993) [ 5 ] explained that this model, in many ways, is similar to Caswell's neutral theory of biodiversity, mainly because species appear to act independently of each other. The Random Fraction model describes a process in which niche size is chosen at random by sequentially colonizing species. The initial species chooses a random portion of the total niche, and subsequent colonizing species also choose a random portion of the total niche and divide it randomly until all species have colonized. Tokeshi (1990) [ 4 ] found this model to be compatible with some epiphytic chironomid communities, and more recently it has been used to explain the relative abundance distributions of phytoplankton communities, salt meadow vegetation, some communities of insects in the order Diptera, some ground beetle communities, functional and taxonomic groupings of stream fish in Texas bio-regions, and ichneumonid parasitoids. A similar model was developed by Sugihara in an attempt to provide a biological explanation for the log normal distribution of Preston (1948). [ 11 ] Sugihara's (1980) [ 3 ] Fixed Division model is similar to the Random Fraction model, but its randomness is drawn from a triangular distribution with a mean of 0.75 rather than a distribution with a mean of 0.5, as used in the Random Fraction. Sugihara used a triangular distribution to draw the random variables because the randomness of some natural populations matches a triangular distribution with a mean of 0.75. The Power Fraction model can explain a relative abundance distribution when the probability of colonizing an existing niche in a resource pool is positively related to the size of that niche (measured as abundance, biomass, etc.).
The probability with which a portion of the niche is colonized depends on the relative sizes of the established niches, and is scaled by an exponent k . k can take a value between 0 and 1, and if k > 0 there is always a slightly higher probability that the larger niche will be colonized. This model is touted as being more biologically realistic, because one can imagine many cases where the niche with the larger proportion of resources is more likely to be invaded, since that niche has more resource space and thus more opportunity for acquisition. The Random Fraction model of niche apportionment is one extreme of the Power Fraction model, where k = 0; the other extreme of the Power Fraction, when k = 1, resembles the MacArthur Fraction model, where the probability of colonization is directly proportional to niche size. [ 6 ] [ 12 ] The MacArthur Fraction model requires that the initial niche is broken at random and that successive niches are chosen with a probability proportional to their size. In this model the largest niche always has a greater probability of being broken, relative to the smaller niches in the resource pool. This model can lead to a more even distribution, in which larger niches are more likely to be broken, facilitating co-existence between species in equivalently sized niches. The basis for the MacArthur Fraction model is the Broken Stick model, developed by MacArthur (1957). These models produce similar results, but one of the main conceptual differences is that niches are filled simultaneously in the Broken Stick model rather than sequentially as in the MacArthur Fraction. Tokeshi (1993) [ 5 ] argues that sequentially invading a resource pool is more biologically realistic than simultaneously breaking the niche space. When the abundances of fish from all bio-regions in Texas were combined, the distribution resembled the Broken Stick model of niche apportionment, suggesting a relatively even distribution of freshwater fish species in Texas. [ 10 ] The Dominance Decay model can be thought of as the inverse of the Dominance Pre-emption model. First, the initial resource pool is colonized randomly, and subsequent colonizers then always colonize the largest niche, whether or not it is already colonized. This model generates the most even community of the niche apportionment models described above, because the largest niche is always broken into two smaller fragments, which are more likely to be equivalent in size to the smaller niche that was not broken. Communities with this "level" of evenness seem to be rare in natural systems. However, one such community is the relative abundance distribution of filter feeders at one site within the River Danube in Austria. [ 13 ] A composite model exists when a combination of niche apportionment models act in different portions of the resource pool. Fesl (2002) [ 13 ] shows how a composite model might appear in a study of freshwater Diptera , in that different niche apportionment models fit different functional groups of the data. Another example, by Higgins and Strauss (2008), modeling fish assemblages, found that fish communities from different habitats and with different species compositions conform to different niche apportionment models; thus the entire species assemblage was a combination of models in different regions of the species' range. Mechanistic models of niche apportionment are intended to describe communities, and researchers have used these models in many ways to investigate temporal and geographic trends in species abundance.
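As a rough illustration of how such sequential-breakage models can be simulated, the following Python sketch implements simplified versions of two of the models described above, dominance pre-emption and random fraction. The function names and implementation details are this example's own simplifications and may not match Tokeshi's original algorithms exactly.

```python
import random

def dominance_preemption(n_species: int) -> list[float]:
    """Each new species takes a random 50-100% share of the smallest remaining niche."""
    niches = [1.0]
    for _ in range(n_species - 1):
        smallest = min(niches)
        niches.remove(smallest)
        taken = smallest * random.uniform(0.5, 1.0)
        niches.extend([taken, smallest - taken])
    return sorted(niches, reverse=True)

def random_fraction(n_species: int) -> list[float]:
    """Each new species splits a randomly chosen existing niche at a random point."""
    niches = [1.0]
    for _ in range(n_species - 1):
        target = niches.pop(random.randrange(len(niches)))
        split = random.random()
        niches.extend([target * split, target * (1 - split)])
    return sorted(niches, reverse=True)

# Rank-abundance vectors for a hypothetical 10-species assemblage under each model.
print(dominance_preemption(10))
print(random_fraction(10))
```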
For many years the fit of niche apportionment models was assessed by eye, with graphs of the models compared against empirical data. [ 5 ] More recently, statistical tests of the fit of niche apportionment models to empirical data have been developed. [ 14 ] [ 15 ] The latter method (Etienne and Olff 2005) [ 15 ] uses a Bayesian simulation of the models to test their fit to empirical data. The former method, which is still commonly used, simulates the expected relative abundances of each model, drawn from a normal distribution, given the same number of species as the empirical data. Each model is simulated multiple times, and the mean and standard deviation can be calculated to assign confidence intervals around each relative abundance distribution. The confidence interval around each rank can then be tested against the empirical data for each model to determine model fit. The confidence intervals are calculated as follows: [ 4 ] [ 12 ] R(x_i) = μ_i ± r·σ_i / √n, where r is the confidence limit of the simulated data, σ_i is the standard deviation of the simulated data, and n is the number of replicates of the empirical sample. For more information on the simulation of niche apportionment models, see the program Power Niche. [ 14 ]
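A minimal sketch of the simulation-based fitting procedure described above might look like the following Python code, which simulates a model many times and computes, for each abundance rank, the band μ_i ± r·σ_i/√n. The helper model used in the demonstration is the simplified random-fraction rule from the earlier sketch; all names and default values (e.g. r = 1.96, 1000 simulations) are illustrative assumptions, not part of the original method or of Power Niche.

```python
import random
import statistics

def random_fraction(n_species: int) -> list[float]:
    """Simplified random-fraction rule: repeatedly split a randomly chosen niche."""
    niches = [1.0]
    for _ in range(n_species - 1):
        target = niches.pop(random.randrange(len(niches)))
        split = random.random()
        niches.extend([target * split, target * (1 - split)])
    return sorted(niches, reverse=True)

def rank_abundance_band(model, n_species: int, n_sims: int = 1000,
                        r: float = 1.96, n_reps: int = 1) -> list[tuple[float, float]]:
    """Return, for each rank i, the band (mu_i - r*sigma_i/sqrt(n), mu_i + r*sigma_i/sqrt(n))
    computed from repeated simulations of the given model."""
    sims = [model(n_species) for _ in range(n_sims)]
    bands = []
    for rank in range(n_species):
        values = [s[rank] for s in sims]
        mu = statistics.mean(values)
        sigma = statistics.stdev(values)
        half = r * sigma / n_reps ** 0.5
        bands.append((mu - half, mu + half))
    return bands

# An observed rank-abundance vector that falls inside the band at every rank
# would be judged consistent with the model.
for rank, (low, high) in enumerate(rank_abundance_band(random_fraction, 10), start=1):
    print(f"rank {rank}: {low:.3f} to {high:.3f}")
```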
https://en.wikipedia.org/wiki/Niche_apportionment_models
Niche construction is the ecological process by which an organism alters its own (or another species') local environment. These alterations can be physical changes to the organism's environment, or they can encompass the active movement of an organism from one habitat to another, where it then experiences different environmental pressures. Examples of niche construction include the building of nests and burrows by animals, and the creation of shade, the influencing of wind speed, and alterations to nutrient cycling by plants. Although these modifications are often directly beneficial to the constructor , they are not always: for example, when organisms dump detritus , they can degrade their own local environments. Within some biological evolutionary frameworks, niche construction can actively give rise to ecological inheritance , whereby the organism in question "constructs" new or unique ecological, and perhaps even social, environments characterized by specific selective pressures . For niche construction to affect evolution it must satisfy three criteria: 1) the organism must significantly modify environmental conditions, 2) these modifications must influence one or more selection pressures on a recipient organism, and 3) there must be an evolutionary response in at least one recipient population caused by the environmental modification. [ 1 ] [ 2 ] The first two criteria alone provide evidence of niche construction. Recently, some biologists have argued that niche construction is an evolutionary process that works in conjunction with natural selection . [ 1 ] Evolution entails networks of feedbacks in which previously selected organisms drive environmental changes, and organism-modified environments subsequently select for changes in organisms. [ 1 ] [ 3 ] [ 4 ] The complementary match between an organism and its environment results from the two processes of natural selection and niche construction. The effect of niche construction is especially pronounced in situations where environmental alterations persist for several generations, introducing the evolutionary role of ecological inheritance . This theory emphasizes that organisms inherit two legacies from their ancestors: genes and a modified environment. A niche-constructing organism may or may not be considered an ecosystem engineer . Ecosystem engineering is a related but non-evolutionary concept referring to structural changes brought about in the environment by organisms. [ 5 ] As creatures construct new niches, they can have a significant effect on the world around them. [ 1 ] Niche construction theory (NCT) has been anticipated by diverse people in the past, including the physicist Erwin Schrödinger in his What Is Life? and Mind and Matter essays (1944). An early advocate of the niche construction perspective in biology was the developmental biologist Conrad Waddington . He drew attention to the many ways in which animals modify their selective environments throughout their lives, by choosing and changing their environmental conditions, a phenomenon that he termed "the exploitive system". [ 14 ] The niche construction perspective was subsequently brought to prominence through the writings of Harvard evolutionary biologist Richard Lewontin .
In the 1970s and 1980s Lewontin wrote a series of articles on adaptation, in which he pointed out that organisms do not passively adapt through selection to pre-existing conditions, but actively construct important components of their niches. [ 4 ] The Oxford biologist John Odling-Smee (1988) coined the term 'niche construction' and was the first to argue that 'niche construction' and ' ecological inheritance ' should be recognized as evolutionary processes. [ 15 ] Over the next decade research into niche construction increased rapidly, with a rush of experimental and theoretical studies across a broad range of fields. Mathematical evolutionary theory explores both the evolution of niche construction and its evolutionary and ecological consequences. These analyses suggest that niche construction is of considerable importance for evolutionary and ecological dynamics. Niche construction theory has had a particular impact in the human sciences, including biological anthropology , [ 24 ] archaeology , [ 25 ] and psychology . [ 26 ] Niche construction is now recognized to have played important roles in human evolution , [ 24 ] [ 27 ] including the evolution of cognitive capabilities. [ 28 ] Its impact is probably due to the fact that humans possess an unusually potent capability to regulate, construct and destroy their environments, and that this is generating some pressing current problems (e.g. climate change , deforestation , urbanization ). However, human scientists have also been attracted to the niche construction perspective because it recognizes human activities as a directing process, rather than merely the consequence of natural selection . [ 1 ] [ 25 ] Cultural niche construction can also feed back to affect other cultural processes, and even genetics. Niche construction theory emphasizes how acquired characters play an evolutionary role through transforming selective environments. This is particularly relevant to human evolution, where our species appears to have engaged in extensive environmental modification through cultural practices. [ 29 ] Such cultural practices are typically not themselves biological adaptations; rather, they are the adaptive product of much more general adaptations, such as the ability to learn (particularly from others), to teach, and to use language, that underlie human culture. Mathematical models have established that cultural niche construction can modify natural selection on human genes and drive evolutionary events. This interaction is known as gene-culture coevolution . There is now little doubt that human cultural niche construction has co-directed human evolution. [ 29 ] Humans have modified selection, for instance, by dispersing into new environments with different climatic regimes, devising agricultural practices or domesticating livestock. A well-researched example is the finding that dairy farming created the selection pressure that led to the spread of alleles for adult lactase persistence. [ 30 ] Analyses of the human genome have identified many hundreds of genes subject to recent selection, and human cultural activities are thought to be a major source of selection in many cases. The lactase persistence example may be representative of a very general pattern of gene-culture coevolution. Niche construction is also now central to several accounts of how language evolved.
For instance, Derek Bickerton describes how our ancestors constructed scavenging niches that required them to communicate in order to recruit sufficient individuals to drive predators away from megafauna corpses. [ 28 ] He maintains that our use of language, in turn, created a new niche in which sophisticated cognition was beneficial. While the fact that niche construction occurs is non-contentious, and its study goes back to Darwin's classic books on earthworms and corals , the evolutionary consequences of niche construction have not always been fully appreciated. Researchers differ over the extent to which niche construction requires changes in understanding of the evolutionary process. Many advocates of the niche-construction perspective align themselves with other progressive elements in seeking an extended evolutionary synthesis , [ 31 ] [ 32 ] a stance that other prominent evolutionary biologists reject. [ 33 ] Laubichler and Renn [ 32 ] argue that niche construction theory offers the prospect of a broader synthesis of evolutionary phenomena through "the notion of expanded and multiple inheritance systems (from genomic to ecological, social and cultural)." [ 32 ] Niche construction theory (NCT) remains controversial, particularly amongst orthodox evolutionary biologists. [ 34 ] [ 35 ] In particular, the claim that niche construction is an evolutionary process has excited controversy. A collaboration between some critics of the niche-construction perspective and one of its advocates attempted to pinpoint their differences. [ 35 ] They wrote: "NCT argues that niche construction is a distinct evolutionary process, potentially of equal importance to natural selection. The skeptics dispute this. For them, evolutionary processes are processes that change gene frequencies , of which they identify four ( natural selection , genetic drift , mutation , migration [ie. gene flow ])... They do not see how niche construction either generates or sorts genetic variation independently of these other processes, or how it changes gene frequencies in any other way. In contrast, NCT adopts a broader notion of an evolutionary process, one that it shares with some other evolutionary biologists. Although the advocate agrees that there is a useful distinction to be made between processes that modify gene frequencies directly, and factors that play different roles in evolution... The skeptics probably represent the majority position: evolutionary processes are those that change gene frequencies. Advocates of NCT, in contrast, are part of a sizable minority of evolutionary biologists that conceive of evolutionary processes more broadly, as anything that systematically biases the direction or rate of evolution, a criterion that they (but not the skeptics) feel niche construction meets." [ 35 ] The authors conclude that their disagreements reflect a wider dispute within evolutionary theory over whether the modern synthesis is in need of reformulation, as well as different usages of some key terms (e.g., evolutionary process). Further controversy surrounds the application of niche construction theory to the origins of agriculture within archaeology.
In a 2015 review, the archaeologist Bruce Smith concluded: "Explanations [for domestication of plants and animals] based on diet breadth modeling are found to have a number of conceptual, theoretical, and methodological flaws; approaches based on niche construction theory are far better supported by the available evidence in the two regions considered [eastern North America and the Neotropics ]". [ 36 ] However, other researchers see no conflict between niche construction theory and the application of behavioral ecology methods in archaeology. [ 37 ] [ 38 ] A critical review by Manan Gupta and colleagues was published in 2017, leading to a dispute amongst critics and proponents. [ 39 ] [ 40 ] [ 41 ] [ clarification needed ] In 2018 another review reassessed the importance of niche construction and extragenetic adaptation in evolutionary processes. [ 42 ]
https://en.wikipedia.org/wiki/Niche_construction
In ecology , a niche is the match of a species to a specific environmental condition. [ 1 ] [ 2 ] It describes how an organism or population responds to the distribution of resources and competitors (for example, by growing when resources are abundant, and when predators , parasites and pathogens are scarce) and how it in turn alters those same factors (for example, limiting access to resources by other organisms, acting as a food source for predators and a consumer of prey). "The type and number of variables comprising the dimensions of an environmental niche vary from one species to another [and] the relative importance of particular environmental variables for a species may vary according to the geographic and biotic contexts". [ 3 ] A Grinnellian niche is determined by the habitat in which a species lives and its accompanying behavioral adaptations . An Eltonian niche emphasizes that a species not only grows in and responds to an environment, it may also change the environment and its behavior as it grows. The Hutchinsonian niche uses mathematics and statistics to try to explain how species coexist within a given community. The concept of ecological niche is central to ecological biogeography , which focuses on spatial patterns of ecological communities. [ 4 ] "Species distributions and their dynamics over time result from properties of the species, environmental variation..., and interactions between the two—in particular the abilities of some species, especially our own, to modify their environments and alter the range dynamics of many other species." [ 5 ] Alteration of an ecological niche by its inhabitants is the topic of niche construction . [ 6 ] The majority of species exist in a standard ecological niche, sharing behaviors, adaptations, and functional traits similar to the other closely related species within the same broad taxonomic class, but there are exceptions. A premier example of a non-standard niche filling species is the flightless, ground-dwelling kiwi bird of New Zealand, which feeds on worms and other ground creatures, and lives its life in a mammal-like niche. Island biogeography can help explain island species and associated unfilled niches. The ecological meaning of niche comes from the meaning of niche as a recess in a wall for a statue, [ 7 ] which itself is probably derived from the Middle French word nicher , meaning to nest . [ 8 ] [ 7 ] The term was coined by the naturalist Roswell Hill Johnson [ 9 ] but Joseph Grinnell was probably the first to use it in a research program in 1917, in his paper "The niche relationships of the California Thrasher". [ 10 ] [ 1 ] The Grinnellian niche concept embodies the idea that the niche of a species is determined by the habitat in which it lives and its accompanying behavioral adaptations . In other words, the niche is the sum of the habitat requirements and behaviors that allow a species to persist and produce offspring. For example, the behavior of the California thrasher is consistent with the chaparral habitat it lives in—it breeds and feeds in the underbrush and escapes from its predators by shuffling from underbrush to underbrush. Its 'niche' is defined by the felicitous complementing of the thrasher's behavior and physical traits (camouflaging color, short wings, strong legs) with this habitat. [ 10 ] Grinnellian niches can be defined by non-interactive (abiotic) variables and environmental conditions on broad scales. 
[ 11 ] Variables of interest in this niche class include average temperature, precipitation, solar radiation, and terrain aspect which have become increasingly accessible across spatial scales. Most literature has focused on Grinnellian niche constructs, often from a climatic perspective, to explain distribution and abundance. Current predictions on species responses to climate change strongly rely on projecting altered environmental conditions on species distributions. [ 12 ] However, it is increasingly acknowledged that climate change also influences species interactions and an Eltonian perspective may be advantageous in explaining these processes. This perspective of niche allows for the existence of both ecological equivalents and empty niches. An ecological equivalent to an organism is an organism from a different taxonomic group exhibiting similar adaptations in a similar habitat, an example being the different succulents found in American and African deserts, cactus and euphorbia , respectively. [ 13 ] As another example, the anole lizards of the Greater Antilles are a rare example of convergent evolution , adaptive radiation , and the existence of ecological equivalents: the anole lizards evolved in similar microhabitats independently of each other and resulted in the same ecomorphs across all four islands. In 1927 Charles Sutherland Elton , a British ecologist , defined a niche as follows: "The 'niche' of an animal means its place in the biotic environment, its relations to food and enemies ." [ 14 ] Elton classified niches according to foraging activities ("food habits"): [ 15 ] For instance there is the niche that is filled by birds of prey which eat small animals such as shrews and mice. In an oak wood this niche is filled by tawny owls , while in the open grassland it is occupied by kestrels . The existence of this carnivore niche is dependent on the further fact that mice form a definite herbivore niche in many different associations, although the actual species of mice may be quite different. [ 14 ] Conceptually, the Eltonian niche introduces the idea of a species' response to and effect on the environment. Unlike other niche concepts, it emphasizes that a species not only grows in and responds to an environment based on available resources, predators, and climatic conditions, but also changes the availability and behavior of those factors as it grows. [ 16 ] In an extreme example, beavers require certain resources in order to survive and reproduce, but also construct dams that alter water flow in the river where the beaver lives. Thus, the beaver affects the biotic and abiotic conditions of other species that live in and near the watershed. [ 17 ] In a more subtle case, competitors that consume resources at different rates can lead to cycles in resource density that differ between species. [ 18 ] Not only do species grow differently with respect to resource density, but their own population growth can affect resource density over time . Eltonian niches focus on biotic interactions and consumer–resource dynamics (biotic variables) on local scales. [ 11 ] Because of the narrow extent of focus, data sets characterizing Eltonian niches typically are in the form of detailed field studies of specific individual phenomena, as the dynamics of this class of niche are difficult to measure at a broad geographic scale. However, the Eltonian niche may be useful in the explanation of a species' endurance of global change. 
[ 16 ] Because adjustments in biotic interactions inevitably change abiotic factors, Eltonian niches can be useful in describing the overall response of a species to new environments. The Hutchinsonian niche is an " n-dimensional hypervolume", where the dimensions are environmental conditions and resources , that define the requirements of an individual or a species to practice its way of life, more particularly, for its population to persist. [ 2 ] The "hypervolume" defines the multi-dimensional space of resources (e.g., light, nutrients, structure, etc.) available to (and specifically used by) organisms, and "all species other than those under consideration are regarded as part of the coordinate system." [ 19 ] The niche concept was popularized by the zoologist G. Evelyn Hutchinson in 1957. [ 19 ] Hutchinson inquired into the question of why there are so many types of organisms in any one habitat. His work inspired many others to develop models to explain how many and how similar coexisting species could be within a given community, and led to the concepts of 'niche breadth' (the variety of resources or habitats used by a given species), 'niche partitioning' (resource differentiation by coexisting species), and 'niche overlap' (overlap of resource use by different species). [ 20 ] Statistics were introduced into the Hutchinson niche by Robert MacArthur and Richard Levins using the 'resource-utilization' niche, employing histograms to describe the 'frequency of occurrence' as a function of a Hutchinson coordinate. [ 2 ] [ 21 ] So, for instance, a Gaussian might describe the frequency with which a species eats prey of a certain size, giving a more detailed niche description than simply specifying some median or average prey size. For such a bell-shaped distribution, the position , width and form of the niche correspond to the mean , standard deviation and the actual distribution itself. [ 22 ] One advantage of using statistics can be seen by comparing utilization curves of different widths: for narrow distributions there is no competition for prey between species at the extreme left and extreme right of the resource axis, while for broader distributions niche overlap indicates that competition can occur between all species. The resource-utilization approach postulates that not only can competition occur, but that it does occur, and that overlap in resource utilization directly enables the estimation of the competition coefficients. [ 23 ] This postulate, however, can be misguided, as it ignores the impacts that the resources of each category have on the organism and the impacts that the organism has on the resources of each category. For instance, the resource in the overlap region can be non-limiting, in which case there is no competition for this resource despite niche overlap. [ 1 ] [ 20 ] [ 23 ] An organism free of interference from other species could use the full range of conditions (biotic and abiotic) and resources in which it could survive and reproduce; this is called its fundamental niche . [ 24 ] However, as a result of pressure from, and interactions with, other organisms (i.e. inter-specific competition), species are usually forced to occupy a niche that is narrower than this, and to which they are most highly adapted ; this is termed the realized niche . [ 24 ] Hutchinson used the idea of competition for resources as the primary mechanism driving ecology, but overemphasis upon this focus has proved to be a handicap for the niche concept.
[ 20 ] In particular, overemphasis upon a species' dependence upon resources has led to too little emphasis upon the effects of organisms on their environment, for instance, colonization and invasions. The term "adaptive zone" was coined by the paleontologist George Gaylord Simpson to explain how a population could jump from one niche to another that suited it: an 'adaptive zone' made available by virtue of some modification, or possibly a change in the food chain , that became accessible without a discontinuity in the population's way of life because the group was 'pre-adapted' to the new ecological opportunity. [ 25 ] Hutchinson's "niche" (a description of the ecological space occupied by a species) is subtly different from the "niche" as defined by Grinnell (an ecological role, that may or may not be actually filled by a species—see vacant niches ). A niche is a very specific segment of ecospace occupied by a single species. On the presumption that no two species are identical in all respects (called Hardin's 'axiom of inequality' [ 26 ] ) and the competitive exclusion principle , some resource or adaptive dimension will provide a niche specific to each species. [ 24 ] Species can, however, share a 'mode of life' or 'autecological strategy', which are broader definitions of ecospace. [ 27 ] For example, Australian grasslands species, though different from those of the Great Plains grasslands, exhibit similar modes of life. [ 28 ] Once a niche is left vacant, other organisms can fill that position. For example, the niche that was left vacant by the extinction of the tarpan has been filled by other animals (in particular a small horse breed, the konik ). Also, when plants and animals are introduced into a new environment, they have the potential to occupy or invade the niche or niches of native organisms, often outcompeting the indigenous species. Introduction of non-indigenous species to non-native habitats by humans often results in biological pollution by the exotic or invasive species . The mathematical representation of a species' fundamental niche in ecological space, and its subsequent projection back into geographic space, is the domain of niche modelling . [ 29 ] Contemporary niche theory (also called "classic niche theory" in some contexts) is a framework that was originally designed to reconcile different definitions of niches (see Grinnellian, Eltonian, and Hutchinsonian definitions above), and to help explain the underlying processes that affect Lotka–Volterra relationships within an ecosystem. The framework centers around "consumer-resource models", which largely split a given ecosystem into resources (e.g. sunlight or available water in soil) and consumers (e.g. any living thing, including plants and animals), and attempts to define the scope of possible relationships that could exist between the two groups. [ 30 ] In contemporary niche theory, the "impact niche" is defined as the combination of effects that a given consumer has on both (a) the resources that it uses and (b) the other consumers in the ecosystem. Therefore, the impact niche is equivalent to the Eltonian niche, since both concepts are defined by the impact of a given species on its environment. [ 30 ] The range of environmental conditions where a species can successfully survive and reproduce (i.e. the Hutchinsonian definition of a realized niche) is also encompassed under contemporary niche theory, termed the "requirement niche".
The requirement niche is bounded by both the availability of resources and the effects of coexisting consumers (e.g. competitors and predators). [ 30 ] Contemporary niche theory provides three requirements that must be met in order for two species (consumers) to coexist: [ 30 ] These requirements are interesting and controversial because they require any two coexisting species to share a certain environment (have overlapping requirement niches) but to differ fundamentally in the ways that they use (or "impact") that environment. These requirements have repeatedly been violated by nonnative (i.e. introduced and invasive ) species, which often coexist with new species in their nonnative ranges, but do not appear to be constrained by these requirements. In other words, contemporary niche theory predicts that species will be unable to invade new environments outside of their requirement (i.e. realized) niche, yet many examples of this are well-documented. [ 31 ] [ 32 ] Additionally, contemporary niche theory predicts that species will be unable to establish in environments where other species already consume resources in the same ways as the incoming species; however, examples of this are also numerous. [ 33 ] [ 32 ] In ecology , niche differentiation (also known as niche segregation , niche separation and niche partitioning ) refers to the process by which competing species use the environment differently in a way that helps them to coexist. [ 34 ] The competitive exclusion principle states that if two species with identical niches (ecological roles) compete , then one will inevitably drive the other to extinction. [ 35 ] This rule also states that two species cannot occupy the same exact niche in a habitat and coexist together, at least in a stable manner. [ 36 ] When two species differentiate their niches, they tend to compete less strongly, and are thus more likely to coexist. Species can differentiate their niches in many ways, such as by consuming different foods, or using different areas of the environment. [ 37 ] As an example of niche partitioning, several anole lizards in the Caribbean islands share common diets—mainly insects. They avoid competition by occupying different physical locations. Although these lizards might occupy different locations, some species can be found inhabiting the same range, with up to 15 in certain areas. [ 38 ] For example, some live on the ground while others are arboreal. Species that live in different areas compete less for food and other resources, which minimizes competition between species. However, species that live in similar areas typically compete with each other. [ 39 ] The Lotka–Volterra equation states that two competing species can coexist when intra-specific (within species) competition is greater than inter-specific (between species) competition. [ 40 ] Since niche differentiation concentrates competition within species, due to a decrease in between-species competition, the Lotka–Volterra model predicts that niche differentiation of any degree will result in coexistence. In reality, this still leaves the question of how much differentiation is needed for coexistence. [ 41 ] A vague answer to this question is that the more similar two species are, the more finely balanced the suitability of their environment must be in order to allow coexistence.
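The coexistence condition attributed to the Lotka–Volterra competition model can be seen in a minimal numerical sketch. The code below is an illustrative Euler integration of the standard two-species competition equations with made-up parameter values; it is not drawn from the cited sources.

```python
def simulate(alpha12, alpha21, r=1.0, K=100.0, n1=5.0, n2=5.0, dt=0.01, steps=100_000):
    """Euler integration of two-species Lotka-Volterra competition with equal
    intrinsic growth rates and carrying capacities. alpha12 is the per-capita
    effect of species 2 on species 1, relative to intraspecific competition."""
    for _ in range(steps):
        dn1 = r * n1 * (K - n1 - alpha12 * n2) / K
        dn2 = r * n2 * (K - n2 - alpha21 * n1) / K
        n1 = max(n1 + dt * dn1, 0.0)
        n2 = max(n2 + dt * dn2, 0.0)
    return round(n1, 1), round(n2, 1)

# Niche differentiation: interspecific competition weaker than intraspecific.
print(simulate(alpha12=0.5, alpha21=0.5))   # both persist, near K / (1 + alpha)
# No differentiation: interspecific competition at least as strong as intraspecific.
print(simulate(alpha12=1.2, alpha21=1.1))   # one species is driven toward zero
```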
There are limits to the amount of niche differentiation required for coexistence, and this can vary with the type of resource, the nature of the environment, and the amount of variation both within and between the species. To answer questions about niche differentiation, it is necessary for ecologists to be able to detect, measure, and quantify the niches of different coexisting and competing species. This is often done through a combination of detailed ecological studies, controlled experiments (to determine the strength of competition), and mathematical models . [ 42 ] [ 43 ] To understand the mechanisms of niche differentiation and competition, much data must be gathered on how the two species interact, how they use their resources, and the type of ecosystem in which they exist, among other factors. In addition, several mathematical models exist to quantify niche breadth, competition, and coexistence (Bastolla et al. 2005). However, regardless of methods used, niches and competition can be distinctly difficult to measure quantitatively, and this makes detection and demonstration of niche differentiation difficult and complex. Over time, two competing species can either coexist, through niche differentiation or other means, or compete until one species becomes locally extinct . Several theories exist for how niche differentiation arises or evolves given these two possible outcomes. Niche differentiation can arise from current competition. For instance, species X has a fundamental niche of the entire slope of a hillside, but its realized niche is only the top portion of the slope because species Y, which is a better competitor but cannot survive on the top portion of the slope, has excluded it from the lower portion of the slope. With this scenario, competition will continue indefinitely in the middle of the slope between these two species. Because of this, detection of the presence of niche differentiation (through competition) will be relatively easy. Importantly, there is no evolutionary change of the individual species in this case; rather this is an ecological effect of species Y out-competing species X within the bounds of species Y's fundamental niche. Another way by which niche differentiation can arise is via the previous elimination of species without realized niches. This asserts that at some point in the past, several species inhabited an area, and all of these species had overlapping fundamental niches. However, through competitive exclusion, the less competitive species were eliminated, leaving only the species that were able to coexist (i.e. the most competitive species whose realized niches did not overlap). Again, this process does not include any evolutionary change of individual species, but it is merely the product of the competitive exclusion principle. Also, because no species is out-competing any other species in the final community, the presence of niche differentiation will be difficult or impossible to detect. Finally, niche differentiation can arise as an evolutionary effect of competition. In this case, two competing species will evolve different patterns of resource use so as to avoid competition. Here too, current competition is absent or low, and therefore detection of niche differentiation is difficult or impossible. Below is a list of ways that species can partition their niche. This list is not exhaustive, but illustrates several classic examples. 
Resource partitioning is the phenomenon in which two or more species divide resources such as food, space, and resting sites in order to coexist. For example, some lizard species appear to coexist because they consume insects of differing sizes. [ 44 ] Alternatively, species can coexist on the same resources if each species is limited by different resources, or differently able to capture resources. Different types of phytoplankton can coexist when different species are differently limited by nitrogen, phosphorus, silicon, and light. [ 45 ] In the Galapagos Islands , finches with small beaks are more able to consume small seeds, and finches with large beaks are more able to consume large seeds. If a species' density declines, then the food it most depends on will become more abundant (since there are so few individuals to consume it). As a result, the remaining individuals will experience less competition for food. Although "resource" generally refers to food, species can partition other non-consumable objects, such as parts of the habitat. For example, warblers are thought to coexist because they nest in different parts of trees. [ 46 ] Species can also partition habitat in a way that gives them access to different types of resources. As stated in the introduction, anole lizards appear to coexist because each uses different parts of the forests as perch locations. [ 39 ] This likely gives them access to different species of insects. Research has determined that plants can recognize each other's root systems and differentiate between a clone, a plant grown from the same mother plant's seeds, and other species. Based on the root secretions, also called exudates, plants can make this determination. [ 47 ] The communication between plants starts with the secretions from plant roots into the rhizosphere. If another plant that is kin enters this area, the plant will take up its exudates. The exudate, a mixture of several different compounds, enters the plant's root cells and attaches to a receptor for that chemical, halting growth of the root meristem in that direction if the interaction is with kin. [ 48 ] Simonsen discusses how plants accomplish root communication with the help of beneficial rhizobia and fungal networks, and how pairings of particular genotypes of kin plants, such as the legume M. lupulina, with specific strains of nitrogen-fixing bacteria and rhizobia can alter relationships between kin and non-kin competition. [ 49 ] This means there could be specific subsets of genotypes among kin plants that pair well with particular strains and thereby outcompete other kin. [ 47 ] What might seem like an instance of kin competition could simply be different genotypes of organisms at play in the soil that increase symbiotic efficiency. Predator partitioning occurs when species are attacked differently by different predators (or natural enemies more generally). For example, trees could differentiate their niche if they are consumed by different species of specialist herbivores , such as herbivorous insects. If a species' density declines, so too will the density of its natural enemies, giving it an advantage. Thus, if each species is constrained by different natural enemies, they will be able to coexist. [ 50 ] Early work focused on specialist predators; [ 50 ] however, more recent studies have shown that predators do not need to be pure specialists; they simply need to affect each prey species differently. [ 51 ] [ 52 ] The Janzen–Connell hypothesis represents a form of predator partitioning.
[ 53 ] Conditional differentiation (sometimes called temporal niche partitioning ) occurs when species differ in their competitive abilities based on varying environmental conditions. For example, in the Sonoran Desert , some annual plants are more successful during wet years, while others are more successful during dry years. [ 54 ] As a result, each species will have an advantage in some years, but not others. When environmental conditions are most favorable, individuals will tend to compete most strongly with members of the same species. For example, in a dry year, dry-adapted plants will tend to be most limited by other dry-adapted plants. [ 54 ] This can help them to coexist through a storage effect . Species can differentiate their niche via a competition-predation trade-off if one species is a better competitor when predators are absent, and the other is better when predators are present. Defenses against predators, such as toxic compounds or hard shells, are often metabolically costly. As a result, species that produce such defenses are often poor competitors when predators are absent. Species can coexist through a competition-predation trade-off if predators are more abundant when the less defended species is common, and less abundant if the well-defended species is common. [ 55 ] This effect has been criticized as being weak, because theoretical models suggest that only two species within a community can coexist because of this mechanism. [ 56 ] Two ecological paradigms deal with the problem of niche restriction and segregation. The first paradigm predominates in what may be called "classical" ecology. It assumes that niche space is largely saturated with individuals and species, leading to strong competition. Niches are restricted because "neighbouring" species, i.e., species with similar ecological characteristics such as similar habitats or food preferences, prevent expansion into other niches or even narrow niches down. This continual struggle for existence is an important assumption of natural selection introduced by Darwin as an explanation for evolution. The other paradigm assumes that niche space is to a large degree vacant, i.e., that there are many vacant niches . It is based on many empirical studies [ 57 ] [ 58 ] [ 59 ] and theoretical investigations, especially those of Kauffman (1993). [ 60 ] Causes of vacant niches may be evolutionary contingencies or brief or long-lasting environmental disturbances. Both paradigms agree that species are never "universal" in the sense that they occupy all possible niches; they are always specialized, although the degree of specialization varies. For example, there is no universal parasite which infects all host species and microhabitats within or on them. However, the degree of host specificity varies strongly. Thus, Toxoplasma (Protista) infects numerous vertebrates including humans, whereas Enterobius vermicularis infects only humans. The following mechanisms for niche restriction and segregation have been proposed: Niche restriction : Niche segregation : Both paradigms acknowledge a role for all mechanisms (except possibly for that of random selection of niches in the first paradigm), but emphasis on the various mechanisms varies. The first paradigm stresses the paramount importance of interspecific competition, whereas the second paradigm tries to explain many cases which are thought to be due to competition in the first paradigm, by reinforcement of reproductive barriers and/or random selection of niches.
– Many authors believe in the overriding importance of interspecific competition. Intuitively, one would expect that interspecific competition is of particular importance in all those cases in which sympatric species (i.e., species occurring together in the same area) with large population densities use the same resources and largely exhaust them. However, Andrewartha and Birch (1954, 1984) [ 63 ] [ 64 ] and others have pointed out that most natural populations usually don't even approach exhaustion of resources, and too much emphasis on interspecific competition is therefore wrong. Concerning the possibility that competition has led to segregation in the evolutionary past, Wiens (1974, 1984) [ 65 ] [ 66 ] concluded that such assumptions cannot be proven, and Connell (1980) [ 67 ] found that interspecific competition as a mechanism of niche segregation has been proven only for some pest insects. Barker (1983), [ 68 ] in his review of competition in Drosophila and related genera, which are among the best known animal groups, concluded that the idea of niche segregation by interspecific competition is attractive, but that no study has yet been able to show a mechanism responsible for segregation. Without specific evidence, the possibility of random segregation can never be excluded, and assumption of such randomness can indeed serve as a null-model. – Many physiological and morphological differences between species can prevent hybridization. Evidence for niche segregation as the result of reinforcement of reproductive barriers is especially convincing in those cases in which such differences are not found in allopatric but only in sympatric locations. For example, Kawano (2002) [ 69 ] has shown this for giant rhinoceros beetles in Southeast Asia. Two closely related species occur in 12 allopatric (i.e., in different areas) and 7 sympatric (i.e., in the same area) locations. In the former, body length and length of genitalia are practically identical; in the latter, they are significantly different, and much more so for the genitalia than the body, convincing evidence that reinforcement is an important factor (and possibly the only one) responsible for niche segregation. - The very detailed studies of communities of Monogenea parasitic on the gills of marine and freshwater fishes by several authors have shown the same. Species use strictly defined microhabitats and have very complex copulatory organs. This, and the fact that replicate fish hosts are available in almost unlimited numbers, makes them ideal ecological models. Many congeners (species belonging to the same genus) and non-congeners were found on single host species. The maximum number of congeners was nine species. The only limiting factor is space for attachment, since food (blood, mucus, fast regenerating epithelial cells) is in unlimited supply as long as the fish is alive. Various authors, using a variety of statistical methods, have consistently found that species with different copulatory organs may co-occur in the same microhabitat, whereas congeners with identical or very similar copulatory organs are spatially segregated, convincing evidence that reinforcement and not competition is responsible for niche segregation.
[ 70 ] [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] For a detailed discussion, especially of competition and reinforcement of reproductive barriers, see [ 59 ]. Some competing species have been shown to coexist on the same resource with no observable evidence of niche differentiation and in "violation" of the competitive exclusion principle. One instance is in a group of hispine beetle species. [ 42 ] These beetle species, which eat the same food and occupy the same habitat, coexist without any evidence of segregation or exclusion. The beetles show no aggression either intra- or inter-specifically. Coexistence may be possible through a combination of non-limiting food and habitat resources and high rates of predation and parasitism , though this has not been demonstrated. This example illustrates that the evidence for niche differentiation is by no means universal. Niche differentiation is also not the only means by which coexistence is possible between two competing species. [ 76 ] However, niche differentiation is a critically important ecological idea which explains species coexistence, thus promoting the high biodiversity often seen in many of the world's biomes . Research using mathematical modelling has demonstrated that predation can stabilize lumps of very similar species. The willow warbler , the chiffchaff and other very similar warblers can serve as an example. The idea is that it can be a good strategy either to be very similar to a successful species or to be sufficiently dissimilar from it. Other examples of clusters of nearly identical species occupying the same niche include water beetles, prairie birds and algae. The basic idea is that there can be clusters of very similar species all applying the same successful strategy, with open niche space between the clusters. Here the species cluster takes the place of a single species in the classical ecological models. [ 77 ] The geographic range of a species can be viewed as a spatial reflection of its niche, along with characteristics of the geographic template and the species that influence its potential to colonize. The fundamental geographic range of a species is the area it occupies in which environmental conditions are favorable, without restriction from barriers to dispersal or colonization. [ 4 ] A species will be confined to its realized geographic range when confronting biotic interactions or abiotic barriers that limit dispersal, a narrower subset of its larger fundamental geographic range. An early study on ecological niches conducted by Joseph H. Connell analyzed the environmental factors that limit the range of a barnacle ( Chthamalus stellatus ) on Scotland's Isle of Cumbrae. [ 78 ] In his experiments, Connell described the dominant features of C. stellatus niches and provided an explanation for their distribution in the intertidal zone of the rocky coast of the isle. Connell showed that the upper portion of C. stellatus's range is limited by the barnacle's ability to resist dehydration during periods of low tide. The lower portion of the range was limited by interspecific interactions, namely competition with a cohabiting barnacle species and predation by a snail. [ 78 ] By removing the competing Balanus balanoides , Connell showed that C. stellatus was able to extend the lower edge of its realized niche in the absence of competitive exclusion . These experiments demonstrate how biotic and abiotic factors limit the distribution of an organism. The different dimensions, or plot axes , of a niche represent different biotic and abiotic variables.
These factors may include descriptions of the organism's life history , habitat , trophic position (place in the food chain ), and geographic range. According to the competitive exclusion principle , no two species can occupy the same niche in the same environment for a long time. The parameters of a realized niche are described by the realized niche width of that species. [ 26 ] Some plants and animals, called specialists , need specific habitats and surroundings to survive, such as the spotted owl , which lives specifically in old growth forests. Other plants and animals, called generalists, are not as particular and can survive in a range of conditions, for example the dandelion . [ 79 ]
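A cartoon of the n-dimensional hypervolume idea used in niche modelling can be written in a few lines. The sketch below treats the fundamental niche as a simple rectangular envelope of tolerated intervals; the species, axes, and tolerance values are invented for illustration, and real niche models use far more sophisticated statistical methods.

```python
from dataclasses import dataclass

@dataclass
class NicheAxis:
    """One dimension of the hypervolume: a tolerated interval along an
    environmental variable (e.g. temperature, rainfall)."""
    name: str
    lower: float
    upper: float

def in_fundamental_niche(site: dict, axes: list[NicheAxis]) -> bool:
    """A site falls inside the (rectangular) fundamental niche if every
    environmental value lies within the tolerated interval on its axis."""
    return all(ax.lower <= site[ax.name] <= ax.upper for ax in axes)

# Entirely hypothetical tolerances for an imaginary species.
axes = [NicheAxis("mean_temp_C", 5.0, 25.0),
        NicheAxis("annual_rain_mm", 400.0, 1500.0)]

print(in_fundamental_niche({"mean_temp_C": 18.0, "annual_rain_mm": 900.0}, axes))  # True
print(in_fundamental_niche({"mean_temp_C": 30.0, "annual_rain_mm": 900.0}, axes))  # False
```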
https://en.wikipedia.org/wiki/Niche_segregation
Nicholas Charles Handy FRS [ 1 ] (17 June 1941 – 2 October 2012) was a British theoretical chemist. [ 4 ] [ 5 ] He retired as Professor of quantum chemistry at the University of Cambridge in September 2004. [ 6 ] Handy was born in Wiltshire , England and educated at Clayesmore School . [ 7 ] He studied the Mathematical Tripos at the University of Cambridge [ 3 ] and completed his PhD on theoretical chemistry supervised by Samuel Francis Boys . [ 3 ] [ 8 ] Handy wrote 320 scientific papers published in physical and theoretical chemistry journals. [ 1 ] [ 6 ] [ 9 ] Handy developed several methods in quantum chemistry and theoretical spectroscopy . His contributions greatly helped the understanding of: Handy was elected a Fellow of the Royal Society (FRS) in 1990 . [ 1 ] He was awarded the Leverhulme Medal in 2002 [ 2 ] and was a member of the International Academy of Quantum Molecular Science . [ 11 ] On 2 October 2012 Handy died after a brief battle with pancreatic cancer. [ 7 ]
https://en.wikipedia.org/wiki/Nicholas_C._Handy
Nicholas Donat Kazarinoff (August 12, 1929, Ann Arbor, Michigan – November 21, 1991, Albuquerque, New Mexico ) was an American mathematician, specializing in differential equations. [ 1 ] In 1988 he was elected a Fellow of the American Association for the Advancement of Science (AAAS). [ 2 ] Kazarinoff grew up in Ann Arbor, Michigan, and went to college in his hometown at the University of Michigan . There he graduated with a B.S. in 1950 and an M.S. in 1951. He graduated in 1954 with a Ph.D. in mathematics from the University of Wisconsin–Madison . His Ph.D. thesis Asymptotic Forms for the Whittaker Functions of Large Complex Order m was supervised by Rudolf Ernest Langer . [ 3 ] In the mathematics department of Purdue University , Kazarinoff was from 1953 to 1955 an instructor and from 1955 to 1956 an assistant professor. At the University of Michigan, he was from 1956 to 1960 an assistant professor, from 1960 to 1964 an associate professor, and from 1964 to 1971 a full professor. In 1971 he resigned from the University of Michigan to become the chair of the mathematics department at the University of Buffalo (also known as SUNY Buffalo or the State University of New York, Buffalo) . There he was the Martin Professor of Mathematics from 1972 until his death in 1991. He died in Albuquerque while he was a visiting professor at the University of New Mexico , [ 1 ] where he had also been a visiting professor in 1985. He also held visiting appointments at the University of Wisconsin–Madison's Army Mathematics Research Center (AMRC) (1958–1960), at Rome's Consiglio Nazionale delle Ricerche, CNR (1978 and 1980), and at Beijing University of Technology (1987). At Moscow's Steklov Institute of Mathematics , he was an exchange professor for the academic year 1960–1961 and again in the spring semester of 1965. [ 4 ] Kazarinoff's research focused mainly on differential equations. [ 4 ] His speciality was partial differential equations applied to reaction-diffusion systems. [ 5 ] His research on differential equations included fluid dynamics and dynamical systems. He also did research on the geometry of convex sets, the geometry of theta series, and iteration of real-valued and complex-valued maps. [ 6 ] He was the author or co-author of more than 80 research articles and monographs. [ 1 ] After his death, the University of Michigan established the Nicholas D. Kazarinoff Collegiate Professorship of Complex Systems, Mathematics, and Physics. [ 5 ] Kazarinoff dedicated his book Geometric Analysis to his father, Donat Konstantinovich Kazarinoff (1892–1957), who taught mathematics and engineering at the University of Michigan for 35 years (with 37 years of affiliation and 2 years of academic leave). [ 7 ] [ 8 ] Theorem: Let T be a tetrahedron and let P be a point belonging to T. Let the distances from P to the vertices and to the faces of T be denoted by R_i and r_i , respectively, for i = 1, 2, 3, 4. Then, for any tetrahedron T whose circumcenter is not an exterior point , ΣR_i / Σr_i > 2√2, and 2√2 is the greatest lower bound . According to László Fejes Tóth , [ 9 ] D. K. Kazarinoff stated the inequality but never published his proof, perhaps because he thought that his proof was not simple enough. However, shortly before his death, D. K. Kazarinoff provided a simple proof of the Erdős–Mordell inequality for triangles and gave a generalization to three dimensions. Nicholas D. Kazarinoff used the work of his father as a basis for a proof of D. K. Kazarinoff's inequality for tetrahedra.
[ 10 ] In July 1948, Kazarinoff married Margaret Louise Koning. They had five sons and a daughter. Upon his death in 1991 at age 62, he was survived by his widow, their six children, and eight grandchildren. He was an active member of the Unitarian Universalist Church of Buffalo and served on the church's finance committee. [ 1 ]
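Returning to D. K. Kazarinoff's inequality for tetrahedra stated above, the bound 2√2 ≈ 2.828 can be checked numerically for a concrete configuration. The sketch below (an illustration only, unrelated to either Kazarinoff's proof) places P at the centroid of a regular tetrahedron, where the ratio ΣR_i/Σr_i works out to 3.

```python
import numpy as np

# Vertices of a regular tetrahedron; its centroid (and circumcenter) is the origin.
V = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, -1.0],
              [-1.0, 1.0, -1.0],
              [-1.0, -1.0, 1.0]])
P = V.mean(axis=0)

# R_i: distances from P to the four vertices.
R = np.linalg.norm(V - P, axis=1)

# r_i: distances from P to the four faces (face i is spanned by the vertices other than V[i]).
r = []
for i in range(4):
    a, b, c = V[[j for j in range(4) if j != i]]
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)           # unit normal of the face
    r.append(abs(np.dot(P - a, n)))     # point-to-plane distance
r = np.array(r)

ratio = R.sum() / r.sum()
print(ratio, ratio > 2 * np.sqrt(2))    # 3.0 True, consistent with the inequality
```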
https://en.wikipedia.org/wiki/Nicholas_D._Kazarinoff
Nikolai Timofeevich Beliaev or Nicholas Timothy Belaiew (26 June 1878 – 5 November 1955) was a Russian metallurgist. He was famous for his studies on Damascus steel , the idea of crystallization in metals, and the production of Widmanstätten structures . He also wrote on the history of steel making. Beliaev was born in St. Petersburg to General T. M. Beliaev and Maria Nikolayevna Septjurina. He was educated at Mikhailovskaya Artilleriiskaya Academy and was trained under Dmitry Konstantinovich Chernov and Henry Le Chatelier . He became a professor of metallurgy in 1909. During World War I he was wounded and he was sent to England in 1915. He received a Bessemer Gold Medal in 1937 from the Iron and Steel Institute in London. [ 1 ] A major contribution was his study of crystal structure in steels, both man-made and of meteoric origin, and an examination of their mechanical properties. [ 2 ] [ 3 ] [ 4 ] He also took an interest in Icelandic research. [ 5 ]
https://en.wikipedia.org/wiki/Nicholas_Timothy_Belaiew
Nicholas J. Turro (May 18, 1938 – November 24, 2012) was an American chemist , Wm. P. Schweitzer Professor of Chemistry at Columbia University . He was a world-renowned organic chemist and a leading expert on organic photochemistry. He was the recipient of the 2011 Arthur C. Cope Award in Organic Chemistry , given annually "to recognize outstanding achievement in the field of organic chemistry, the significance of which has become apparent within the five years preceding the year in which the award will be considered." [ 2 ] [ 3 ] [ 4 ] [ 5 ] He was also the recipient of the 2000 Willard Gibbs Award , which recognizes "eminent chemists who...have brought to the world developments that enable everyone to live more comfortably and to understand this world better." [ 6 ] He received his B.A. degree summa cum laude from Wesleyan University in 1960. He attended graduate school at Caltech where he received his Ph.D. degree with George S. Hammond in 1963. Following a postdoctoral year at Harvard with P. D. Bartlett , he joined the faculty at Columbia University where he was the Wm. P. Schweitzer Professor of Chemistry. [ 7 ] Although he worked in many areas of chemistry, he was most well known for his work in photochemistry and spectroscopy, which he applied to studies involving small molecules in solution, interfaces, thin films, polymers, biological systems including DNA and carbohydrates, nanomaterials, supramolecular and "super-duper" molecular systems. [ 8 ] His success in these areas is evident from his co-authorship of over 1000 papers. His expertise in photochemistry, spectroscopy and organic chemistry led to a large network of international collaborators, including Fortune 500 companies such as Procter and Gamble . He authored the influential books Molecular Photochemistry published in 1965, considered the "bible" of the field for several generations by organic photochemists , and Modern Molecular Photochemistry published in 1978. The latter was comprehensively revised as Principles of Molecular Photochemistry: An Introduction in 2008 and later as Modern Molecular Photochemistry of Organic Molecules in 2010, both of which were co-authored with V. Ramamurthy at the University of Miami and J.C. Scaiano at the University of Ottawa . Turro was selected as one of the most highly cited chemists over the past two decades, and published over 900 research papers. [ 9 ] He was a member of both the National Academy of Sciences and the American Academy of Arts and Sciences . [ 10 ]
https://en.wikipedia.org/wiki/Nicholas_Turro
The Nicholas reaction is an organic reaction where a dicobalt octacarbonyl -stabilized propargylic cation is reacted with a nucleophile . Oxidative demetallation gives the desired alkylated alkyne . [ 1 ] [ 2 ] It is named after Kenneth M. Nicholas . Several reviews have been published. [ 3 ] [ 4 ] The addition of dicobalt octacarbonyl to the alkyne of propargylic ether ( 1 ) gives the dicobalt intermediate 2 . Reaction with tetrafluoroboric acid or a Lewis acid gives the key dicobalt octacarbonyl-stabilized propargylic cation ( 3a and 3b ). Addition of a nucleophile followed by a mild oxidation gives the substituted alkyne ( 5 ). The likely reaction intermediate in the process, [(propargylium)Co 2 (CO) 6 ] + cation 3 , possesses considerable stability. It was, in fact, possible to observe these cations by 1 H-NMR at 10 °C when generated using d - trifluoroacetic acid . [ 2 ] Later, Richard E. Connor and Nicholas [ 5 ] were able to isolate salts of such cations 3 as stable, dark red solids by treatment of the Co 2 (CO) 6 -complexed propargyl alcohols with excess fluoroantimonic acid or tetrafluoroboric acid etherate. The reason that these complexes are so remarkably stable is due to significant delocalization of the cationic charge onto the Co 2 (CO) 6 moiety. Experimental evidence for the charge delocalization includes an increase in the IR absorption frequencies of the carbon–oxygen bonds of the cobalt–carbonyl in the cationic intermediates compared with those in the parent alcohols. Also, when the cation is formed, the orbital hybridisation of the central carbon changes from sp 3 to sp 2 . This causes the atoms to exhibit a trigonal–planar arrangement and shortens the covalent bonds around the central carbon in the cation due to the increase in s character. [ 4 ]
https://en.wikipedia.org/wiki/Nicholas_reaction
Nick translation [ 1 ] (or head translation ), developed in 1977 by Peter Rigby and Paul Berg, is a tagging technique in molecular biology in which DNA polymerase I is used to replace some of the nucleotides of a DNA sequence with their labeled analogues, creating a tagged DNA sequence which can be used as a probe in fluorescent in situ hybridization (FISH) or blotting techniques. It can also be used for radiolabeling . [ 2 ] This process is called "nick translation" because the DNA to be processed is treated with DNase to produce single-stranded "nicks", where one of the strands is missing nucleotides. This is followed by replacement at the nicked sites by DNA polymerase I , which removes nucleotides ahead of the nick with its 5'→3' exonuclease activity and adds new, labeled dNTPs from the medium onto the free 3'-hydroxyl end of the nick, moving the nick downstream in the process. [ 3 ] To radioactively label a DNA fragment for use as a probe in blotting procedures, one of the incorporated nucleotides provided in the reaction is radiolabeled in the alpha phosphate position, often using phosphorus-32 . Similarly, a fluorophore can be attached instead for fluorescent labelling, or an antigen for immunodetection. When DNA polymerase I eventually detaches from the DNA, it leaves another nick in the phosphate backbone. The nick has "translated" some distance depending on the processivity of the polymerase. This nick could be sealed by DNA ligase , or its 3' hydroxyl group could serve as a primer for further DNA polymerase I activity. Proprietary enzyme mixes are available commercially to perform all steps in the procedure in a single incubation. Nick translation can cause double-stranded DNA breaks if DNA polymerase I encounters another nick on the opposite strand, resulting in two shorter fragments. This does not influence the performance of the labelled probe in in-situ hybridization. This genetics article is a stub . You can help Wikipedia by expanding it .
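The way the nick moves along the strand can be illustrated with a deliberately simplified, single-strand toy model (this is only a cartoon of the bookkeeping, not of the enzymology; the sequence and nick position below are invented).

```python
import random

def nick_translate(strand, nick_pos, n_steps, label="*"):
    """Toy model of nick translation on one strand of a duplex.

    `strand` is written 5'->3'; `nick_pos` is the index of the first nucleotide on
    the 3' (downstream) side of the nick. At each step the 5'->3' exonuclease of
    DNA polymerase I removes that nucleotide and the polymerase adds a labeled
    nucleotide of the same identity onto the free 3'-OH, so the nick moves one
    position downstream. Labeled residues are shown in lower case with a '*'.
    """
    seq = list(strand)
    for _ in range(n_steps):
        if nick_pos >= len(seq):
            break                                   # ran off the end of the fragment
        seq[nick_pos] = seq[nick_pos].lower() + label
        nick_pos += 1
    return "".join(seq), nick_pos

random.seed(0)
dna = "ATGCGTACGTTAGC"
nick = random.randrange(len(dna))                   # DNase I nicks at a random position
labeled, new_nick = nick_translate(dna, nick, n_steps=5)
print(dna)
print(labeled, "(nick translated to index", new_nick, ")")
```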
https://en.wikipedia.org/wiki/Nick_translation
Nickel(II) chromate (NiCrO 4 ) is an acid-soluble compound, red-brown in color, with high tolerances for heat. It and the ions that compose it have been linked to tumor formation and gene mutation, particularly in wildlife. [ 2 ] Nickel(II) chromate can be formed in the lab by heating a mixture of chromium(III) oxide and nickel oxide at between 700 °C and 800 °C under oxygen at 1000 atm pressure. It can be produced at 535 °C and 7.3 bar oxygen, but the reaction takes days to complete. [ 3 ] If the pressure is too low, or the temperature is too high (above 660 °C), then the nickel chromium spinel NiCr 2 O 4 forms instead. [ 3 ] Karin Brandt also claimed to make nickel chromate using a hydrothermal technique. [ 4 ] [ 5 ] Precipitates of Ni 2+ ions with chromate produce a brown substance that contains water. [ 6 ] The structure of nickel chromate is the same as that of chromium vanadate, CrVO 4 . Crystals have an orthorhombic structure with unit cell sizes a = 5.482 Å, b = 8.237 Å, c = 6.147 Å. The cell volume is 277.6 Å 3 with four formula units per unit cell. [ 5 ] [ 7 ] Nickel chromate is dark in colour, unlike most other chromates, which are yellow. [ 3 ] The infrared spectrum of nickel chromate shows two sets of absorption bands. The first includes lines at 925, 825, and 800 cm −1 due to Cr-O stretching, and the second has lines at 430, 395, and 365 cm −1 (the last very weak) due to Cr-O rocking and bending, and a line at 310 cm −1 produced by Ni-O stretching. [ 8 ] When heated at lower oxygen pressure around 600 °C, nickel chromate decomposes to the nickel chromite spinel, nickel oxide and oxygen. [ 3 ] Nickel chromates can also crystallize with ligands. For instance, with 1,10-phenanthroline it can form triclinic olive-colored crystals of [Ni(1,10-phenanthroline)CrO 4 •3H 2 O]•H 2 O, orange crystals of Ni(1,10-phenanthroline) 3 Cr 2 O 7 •3H 2 O, and yellow powdered Ni(1,10-phenanthroline) 3 Cr 2 O 7 •8H 2 O. [ 6 ]
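The reported cell volume can be checked against the quoted lattice parameters, since for an orthorhombic cell the volume is simply the product of the three edge lengths. The figures below are those given in the text; this is only an arithmetic consistency check.

```python
# Orthorhombic cell: all angles are 90 degrees, so V = a * b * c.
a, b, c = 5.482, 8.237, 6.147        # lattice parameters in angstroms (from the text)
volume = a * b * c
print(round(volume, 1), "A^3")       # ~277.6 A^3, matching the reported cell volume

# With Z = 4 formula units per cell, the volume per NiCrO4 unit is:
print(round(volume / 4, 1), "A^3 per formula unit")
```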
https://en.wikipedia.org/wiki/Nickel(II)_chromate
Nickel(II) iodide is an inorganic compound with the formula NiI 2 . This paramagnetic black solid dissolves readily in water to give bluish-green solutions, [ 1 ] from which crystallizes the aquo complex [Ni(H 2 O) 6 ]I 2 . [ 2 ] This bluish-green colour is typical of hydrated nickel(II) compounds. Nickel iodides find some applications in homogeneous catalysis . The anhydrous material crystallizes in the CdCl 2 motif, featuring octahedral coordination geometry at each Ni(II) center. NiI 2 is prepared by dehydration of the pentahydrate. [ 3 ] NiI 2 readily hydrates, and the hydrated form can be prepared by dissolution of nickel oxide, hydroxide, or carbonate in hydroiodic acid . The anhydrous form can be produced by treating powdered nickel with iodine. [ 4 ] NiI 2 has some industrial applications as a catalyst in carbonylation reactions. [ 5 ] It also has niche uses as a reagent in organic synthesis , especially in conjunction with samarium(II) iodide . [ 6 ] Like many nickel complexes, those derived from hydrated nickel iodide have been used in cross coupling. [ 7 ]
https://en.wikipedia.org/wiki/Nickel(II)_iodide
Nickel (II) nitrate is the inorganic compound Ni(NO 3 ) 2 or any hydrate thereof. In the hexahydrate, the nitrate anions are not bonded to nickel. Other hydrates have also been reported: Ni(NO 3 ) 2 . 9H 2 O, Ni(NO 3 ) 2 . 4H 2 O, and Ni(NO 3 ) 2 . 2H 2 O. [ 3 ] It is prepared by the reaction of nickel oxide with nitric acid: The anhydrous nickel nitrate is typically not prepared by heating the hydrates. Rather it is generated by the reaction of hydrates with dinitrogen pentoxide or of nickel carbonyl with dinitrogen tetroxide : [ 3 ] The hydrated nitrate is often used as a precursor to supported nickel catalysts. [ 3 ] Nickel(II) compounds with oxygenated ligands often feature octahedral coordination geometry. Two polymorphs of the tetrahydrate Ni(NO 3 ) 2 . 4H 2 O have been crystallized. In one the monodentate nitrate ligands are trans [ 4 ] while in the other they are cis. [ 5 ] Nickel(II) nitrate is primarily used in electrotyping and electroplating of metallic nickel. In heterogeneous catalysis, nickel(II) nitrate is used to impregnate alumina . Pyrolysis of the resulting material gives forms of Raney nickel and Urushibara nickel . [ 6 ] In homogeneous catalysis , the hexahydrate is a precatalyst for cross coupling reactions . [ 7 ]
https://en.wikipedia.org/wiki/Nickel(II)_nitrate
Nickel(II) oxide is the chemical compound with the formula NiO . It is the principal oxide of nickel . [ 4 ] It is classified as a basic metal oxide. Several million kilograms of varying quality are produced annually, mainly as an intermediate in the production of nickel alloys. [ 5 ] The mineralogical form of NiO , bunsenite , is very rare. Other oxides of nickel have been claimed, for example Ni 2 O 3 and NiO 2 , but remain unproven. [ 4 ] NiO can be prepared by multiple methods. Upon heating above 400 °C, nickel powder reacts with oxygen to give NiO . In some commercial processes, green nickel oxide is made by heating a mixture of nickel powder and water at 1000 °C; the rate for this reaction can be increased by the addition of NiO . [ 6 ] The simplest and most successful method of preparation is through pyrolysis of nickel(II) compounds such as the hydroxide, nitrate , and carbonate , which yield a light green powder. [ 4 ] Synthesis from the elements by heating the metal in oxygen can yield grey to black powders, which indicates nonstoichiometry . [ 4 ] NiO adopts the NaCl structure, with octahedral Ni 2+ and O 2− sites. The conceptually simple structure is commonly known as the rock salt structure. Like many other binary metal oxides, NiO is often non-stoichiometric, meaning that the Ni:O ratio deviates from 1:1. In nickel oxide, this non-stoichiometry is accompanied by a color change, with the stoichiometrically correct NiO being green and the non-stoichiometric NiO being black. NiO has a variety of specialized applications and generally, applications distinguish between "chemical grade", which is relatively pure material for specialty applications, and "metallurgical grade", which is mainly used for the production of alloys. It is used in the ceramic industry to make frits, ferrites, and porcelain glazes. The sintered oxide is used to produce nickel steel alloys. Charles Édouard Guillaume won the 1920 Nobel Prize in Physics for his work on nickel steel alloys, which he called invar and elinvar . NiO is a commonly used hole transport material in thin film solar cells. [ 7 ] It was also a component in the nickel-iron battery , also known as the Edison Battery, and is a component in fuel cells . It is the precursor to many nickel salts, for use as specialty chemicals and catalysts. More recently, NiO was used to make the NiCd rechargeable batteries found in many electronic devices until the development of the environmentally superior NiMH battery. [ 6 ] NiO, an anodic electrochromic material, has been widely studied as a counter electrode with tungsten oxide, a cathodic electrochromic material, in complementary electrochromic devices . About 4000 tons of chemical grade NiO are produced annually. [ 5 ] Black NiO is the precursor to nickel salts, which arise by treatment with mineral acids. NiO is a versatile hydrogenation catalyst. Heating nickel oxide with either hydrogen, carbon, or carbon monoxide reduces it to metallic nickel. It combines with the oxides of sodium and potassium at high temperatures (>700 °C) to form the corresponding nickelate . [ 6 ] NiO is useful for illustrating the failure of density functional theory (using functionals based on the local-density approximation ) and Hartree–Fock theory to account for the strong correlation .
The term strong correlation refers to behavior of electrons in solids that is not well described (often not even in a qualitatively correct manner) by simple one-electron theories such as the local-density approximation (LDA) or Hartree–Fock theory. [ 8 ] [ citation needed ] For instance, the seemingly simple material NiO has a partially filled 3d-band (the Ni atom has 8 of 10 possible 3d-electrons) and therefore would be expected to be a good conductor. However, strong Coulomb repulsion (a correlation effect) between d-electrons makes NiO instead a wide band gap Mott insulator . Thus, NiO has an electronic structure that is neither simply free-electron-like nor completely ionic, but a mixture of both. [ 9 ] [ 10 ] Long-term inhalation of NiO is damaging to the lungs, causing lesions and in some cases cancer. [ 11 ] The calculated half-life of dissolution of NiO in the blood is more than 90 days. [ 12 ] NiO has a long retention half-time in the lungs; after administration to rodents, it persisted in the lungs for more than 3 months. [ 13 ] [ 12 ] Nickel oxide is classified as a human carcinogen [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] based on increased respiratory cancer risks observed in epidemiological studies of sulfidic ore refinery workers. [ 20 ] In a 2-year National Toxicology Program green NiO inhalation study, some evidence of carcinogenicity in F344/N rats but equivocal evidence in female B6C3F1 mice was observed; there was no evidence of carcinogenicity in male B6C3F1 mice. [ 14 ] Chronic inflammation without fibrosis was observed in the 2-year studies.
https://en.wikipedia.org/wiki/Nickel(II)_oxide
Nickel(II) perchlorate is a collection of inorganic compounds with the chemical formula of Ni(ClO 4 ) 2 (H 2 O) x . The colors of these solids vary with the degree of hydration. For example, the hydrate forms cyan crystals, the pentahydrate forms green crystals, but the hexahydrate (Ni(ClO 4 ) 2 ·6H 2 O) forms blue crystals. Nickel(II) perchlorate hexahydrate is highly soluble in water and soluble in some polar organic solvents . [ 3 ] Aqueous solutions of nickel(II) perchlorate can be obtained by treating nickel(II) hydroxide , nickel(II) chloride or nickel(II) carbonate with perchloric acid . Two hydrates have been characterized by X-ray crystallography : the hexahydrate [ 4 ] [ 5 ] and the octahydrate. [ 6 ] Several other hydrates are mentioned, including the pentahydrate, which is claimed to crystallize at room temperature, the nonahydrate, which is claimed to crystallize at −21.3 °C, a tetrahydrate, and a monohydrate. [ 7 ] The yellow anhydrous product is obtained by treating nickel(II) chloride with chlorine trioxide. As deduced by X-ray crystallography , Ni resides in a distorted octahedral environment and the perchlorate ligands bridge between the Ni(II) centers. [ 8 ] Nickel(II) perchlorate has few practical uses.
https://en.wikipedia.org/wiki/Nickel(II)_perchlorate
Nickel(III) oxide is the inorganic compound with the formula Ni 2 O 3 . It is not well characterized, [ 1 ] and is sometimes referred to as black nickel oxide . Traces of Ni 2 O 3 on nickel surfaces have been mentioned. [ 2 ] [ 3 ] Nickel(III) oxide has been studied theoretically since the early 1930s, [ 4 ] and these studies support its unstable nature at standard temperatures. A nanostructured pure phase of the material was synthesized and stabilized for the first time in 2015 from the reaction of nickel(II) nitrate with sodium hypochlorite and characterized using powder X-ray diffraction and electron microscopy. [ 5 ] This electrochemistry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Nickel(III)_oxide
The Nickel Directive was a European Union directive regulating the use of nickel in jewellery and other products that come into contact with the skin. Since 1 June 2009, it has been subsumed into the REACH Regulation , specifically item 27 of Annex XVII to that regulation. Nevertheless, the term Nickel Directive is still used to refer to the restrictions on nickel usage and the prescribed test method for quantifying nickel release from products, EN 1811 . Allergy to nickel is a common cause of contact dermatitis , with roughly 10% of the population in Western Europe and North America being sensitive to nickel. [ 1 ] [ 2 ] [ 3 ] Initial sensitisation frequently occurs from jewellery such as ear studs and other body piercings, [ 3 ] and nickel allergy is more prevalent among women than men. [ 1 ] [ 4 ] Once sensitised, an individual can develop contact dermatitis from shorter term contact with nickel-containing products: [ 4 ] this is a particular problem given the use of nickel in coinage, [ 5 ] [ 6 ] such as the European one- and two-euro coins [ 7 ] and the Canadian five-cent piece . This led to moves by two European countries to prevent the initial sensitisation of jewellery wearers by limiting the use of nickel in piercing studs and other products which are in prolonged contact with the skin, and then to the European Union Nickel Directive in 1994. The Nickel Directive imposes limits, known as migration limits, on the amount of nickel that may be released from jewellery and other products intended to come into direct and prolonged contact with the skin. Nickel release is measured by a test method known as EN 1811, which involves placing the object in an artificial sweat solution for one week, then measuring nickel by atomic absorption spectroscopy or any other appropriate technique (e.g. ICP-MS ). Other, equivalent test methods may also be accepted. [ 8 ] Wear and corrosion can be simulated by a method known as EN 12472.
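The migration limits are expressed as a release rate per unit surface area per week. The sketch below shows only that unit arithmetic, with made-up numbers; the actual EN 1811 procedure prescribes specific artificial-sweat compositions, exposure conditions and correction factors that are not modelled here.

```python
def nickel_release_rate(nickel_released_ug, exposed_area_cm2, immersion_days):
    """Convert a measured nickel mass released into artificial sweat into the
    micrograms per square centimetre per week figure used by the migration limits."""
    weeks = immersion_days / 7.0
    return nickel_released_ug / (exposed_area_cm2 * weeks)

# Hypothetical example: 3.0 ug of nickel measured (e.g. by AAS or ICP-MS) after a
# one-week immersion of an item with 4.0 cm^2 of exposed surface.
rate = nickel_release_rate(nickel_released_ug=3.0, exposed_area_cm2=4.0, immersion_days=7)
print(round(rate, 2), "ug/cm^2/week")   # 0.75
```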
https://en.wikipedia.org/wiki/Nickel_Directive
Nickel allergy is any of several allergic conditions provoked by exposure to the chemical element nickel . Nickel allergy often takes the form of nickel allergic contact dermatitis ( Ni-ACD ), a form of allergic contact dermatitis (ACD). Ni-ACD typically causes a rash that is red and itchy and that may be bumpy or scaly. The main treatment for it is avoiding contact with nickel-releasing metals, such as inexpensive jewelry. Another form of nickel allergy is a systemic form: systemic nickel allergy syndrome ( SNAS ) can mimic some of the symptoms of irritable bowel syndrome (IBS) and also has a dermatologic component. [ 1 ] The most common sign of nickel allergy is inflammation of the skin at an area that comes into regular contact with nickel. [ 2 ] This often takes the form of a reddened patch of skin with raised bumps ( papules ) or small blisters ( vesicles ), and edema . [ 2 ] People with chronic dermatitis tend to have dry, scaly, and cracked skin at the site of contact. [ 2 ] These sites of inflammation (called "primary eruptions") can occur anywhere on the skin that contacts nickel, but are most common on the hands, face, or anywhere that contacts metal objects such as jewelry or metal clothes buttons. [ 2 ] Particularly high levels of nickel exposure can cause irritated patches of skin to appear at other sites on the body (called "secondary eruptions"). These typically occur as blistering rashes on the hands, eyelids, and at the inside of flexing joints (inside the elbow, back of the knee, etc.). [ 2 ] Ingestion of nickel may cause a systemic reaction, which can result in generalized inflammation of the skin across the body, small blisters in the hands, irritation inside the flexing joints (flexural eczema), and redness and irritation of both buttocks . [ 2 ] Systemic contact dermatitis (SCD) is defined as a dermatitis occurring in an epi-cutaneously contact-sensitized person when exposed to haptens systemically such as orally , per rectum , intravesically , transcutaneously , intrauterinely , intravenously , or by inhalation . [ 3 ] The pathophysiology of systemic nickel allergy syndrome (SNAS) is not well understood. The clinical course is determined by an immunological interplay between two types of T cells (Th1 and Th2 responses). SCD is often considered a subset of SNAS, but with only skin manifestations. [ 4 ] SNAS presents with an array of symptoms ranging from respiratory to generalized skin rash to gastrointestinal symptoms. [ 5 ] The gastrointestinal symptoms may mimic those of irritable bowel syndrome . [ 1 ] A meta-review evaluating SNAS found that 1% of patients sensitized to nickel reacted to the nickel content of a 'normal' diet, and that with increasing doses of nickel more individuals reacted. [ 6 ] SNAS is a multilayered immunological response demonstrating variance between individuals and doses of nickel exposure. Nickel is both naturally abundant – it is the fifth most common element on earth – and widely used in industry and commercial goods. [ 2 ] Workplace nickel exposure is common in many industries, and the performance of normal work tasks can result in nickel skin levels sufficient to elicit dermatitis. [ 2 ] Within the workplace, individuals may be exposed to significant amounts of nickel, airborne from the combustion of fossil fuels or from contact with tools that are nickel plated . [ 7 ] Historically, workplaces where prolonged contact with soluble nickel has been high have shown high risks for allergic contact nickel dermatitis.
For example, nickel dermatitis was common in the past among nickel platers. [ 8 ] Outbreaks of nickel allergy from consumer goods have been documented throughout the 20th century, with jewelry, stocking suspenders, and metallic buttons on blue jeans each resulting in dermatitis at the point of contact. [ 2 ] Nickel can also be present in food and drinking water; ingestion of increased nickel is not associated with systemic allergic disease, but is associated with flare-ups of dermatitis or aggravation of vesicular hand eczema. [ 2 ] Similarly, aggravation of dermatitis has been reported in response to nickel-containing surgical implants or dental gear. [ 2 ] The risk of an object eliciting nickel allergy is linked to the amount of nickel released by its surface (and not to its total nickel content). [ 2 ] Suspected objects can be screened by wiping the surface with a 1% dimethylglyoxime solution that turns pink if more than 0.5 μg/cm 2 per week is released by the surface. [ 2 ] Various methods exist to test the skin or nails for nickel exposure, typically relying on wiping the skin, then quantifying the nickel on the wipe via mass spectrometry . [ 2 ] Dietary nickel exposure may come from high-nickel foods, possibly canned food (via the packaging), possibly stainless steel cookware (whereas some grades of stainless steel contain more nickel than others), or plumbing (especially the first water run from the tap in the morning). [ 1 ] Nickel allergy results in a skin response after the skin comes in contact with an item that releases a large amount of nickel from its surface. It is commonly associated with nickel-containing belt buckles coming into prolonged contact with the skin. [ 9 ] [ 10 ] [ 11 ] The skin reaction can occur at the site of contact, or sometimes spread beyond to the rest of the body. Free (released) nickel that is able to penetrate the skin is taken up by scavenger ( dendritic ) cells and then presented to the immune system T-Cells. With each subsequent exposure to nickel these T cells become stimulated and duplicate themselves. With enough exposure to nickel, the amassing clones of T-cells reach "threshold" and the skin develops a rash . The rash can appear as acute, subacute, or chronic eczema-like skin patches, primarily at the site of contact with the nickel (e.g., earlobe from nickel earrings). From the time of exposure, the rash usually appears within 12–120 hours and can last for 3–4 weeks or for the continued duration of nickel contact/exposure. [ 9 ] Three simultaneous conditions must occur to trigger Ni-ACD: The pathophysiology is divided into induction elicitation phases . Induction is the critical phase (immunological event) when skin contact to nickel results in antigen presentation to the T cells, and T cell duplication (cloning) occurs. The metal cation Ni ++ is a low molecular weight hapten that easily penetrates the stratum corneum (top layer of skin). Nickel then binds to skin protein carriers creating an antigenic epitope . [ 13 ] The determining factor in sensitization is exposure of significant amounts of "free nickel". [ 14 ] This is important because different metal alloys release different amounts of free nickel. The antigenic epitope is collected by dermal dendritic cells and Langerhans cells , the antigen-presenting cells (APC) of the skin, and undergo maturation and migration to regional lymph nodes . The complex is predominantly expressed on major histocompatibility complex ( MHC) II , which activates and clonally expands naive CD4+ T cells . 
[ 15 ] Upon re-exposure these now primed T cells will be activated and massively recruited to the skin, resulting in the elicitation phase and the clinical presentation of Ni-ACD. Although ACD has been considered a Th1-predominant process, recent studies highlight a more complex picture. In Ni-ACD other cells are involved including: Th17 , Th22, Th1/ IFN and the innate immune responses consistent with toll-like receptor 4 . [ 16 ] [ 17 ] Nickel is widely used in manufactured metals because it is both strong and malleable, leading to its ubiquitous presence and the potential for consumers to be in contact with it daily. However, for those who have the rash of allergic contact dermatitis (ACD) due to a nickel allergy, it can be a challenge to avoid. Foods, common kitchen utensils, cell phones, jewelry, and many other items may contain nickel and be a source of irritation due to the allergic reaction caused by the absorption of free released nickel through direct and prolonged contact. The most appropriate measure for nickel-allergic persons is to prevent contact with the allergen. In 2011, researchers showed that applying a thin layer of glycerine emollient containing nanoparticles of either calcium carbonate or calcium phosphate on an isolated piece of pig skin ( in vitro ) and on the skin of mice ( in vivo ) prevents the penetration of nickel ions into the skin. The nanoparticles capture nickel ions by cation exchange, and remain on the surface of the skin, allowing them to be removed by simple washing with water. Approximately 11-fold fewer nanoparticles by mass are required to achieve the same efficacy as the chelating agent ethylenediamine tetraacetic acid . Using nanoparticles with diameters smaller than 500 nm in topical creams may be an effective way to limit the exposure to metal ions that can cause skin irritation. [ 18 ] Pre-emptive avoidance strategies (PEAS) might ultimately lower the sensitization rates of children who would develop ACD. [ 19 ] It is theorized that prevention of exposure to nickel early on could reduce the number of those that are sensitive to nickel by one-quarter to one-third. Identification of the many sources of nickel is vital to understanding the nickel sensitization story: foods like chocolate and fish, zippers, buttons, cell phones and even orthodontic braces and eyeglass frames might contain nickel. Items that carry sentimental value (heirlooms, wedding rings) could be treated with an enamel or rhodium plating . [ 20 ] The Dermatitis Academy has created an educational website to provide more information about nickel, including information about prevention, exposure, sources, and general information about nickel allergy. These resources provide guidance in a prevention initiative for children worldwide. Prevention of SNAS includes modifying dietary choices to avoid certain foods that are higher in nickel than others. [ 1 ] Nickel allergy is typically diagnosed by patch testing – applying a patch with 2.5% (in North America) or 5% (in Europe) nickel sulfate to the upper back and looking for irritation on the skin. [ 2 ] As with other causes of allergic contact dermatitis, patches containing several common allergens are typically applied to the back for 48 hours, removed, then the spots examined for allergic reactions 2 to 5 days later. [ 21 ] SNAS can often mimic IBS [ 1 ] and may be more common than is widely appreciated.
[ 1 ] It should therefore be considered as a differential diagnosis when a doctor is considering a diagnosis of IBS, [ 1 ] and nickel allergy testing is advisable as a means to exclude or confirm SNAS. [ 1 ] Even before such testing, one differentiating factor in the medical history is whether certain foods (for example, peanuts or shellfish) prompt the symptoms, [ 1 ] whereas IBS is not specific to particular foods. [ 1 ] Once a nickel allergy is detected, the best treatment is avoidance of nickel-releasing items. The top 13 categories of items that contain nickel include beauty accessories, eyeglasses, money, cigarettes, clothes, kitchen and household goods, electronics and office equipment, metal utensils, food, jewelry, batteries, orthodontic and dental appliances, and medical equipment. [ 22 ] Beyond strict avoidance of items that release free nickel, there are other options for reducing exposure. The first step is to limit friction between skin and metallic items. Susceptible people may try to limit sweating while wearing nickel items, to reduce nickel release and thus decrease the chance of developing sensitization or allergy. Another option is to shield electronics, metal devices, and tools with fabric, plastic, or acrylic coverings. [ 22 ] Dimethylglyoxime test kits can be very helpful for checking nickel release from items prior to purchase. [ 23 ] The American Contact Dermatitis Society 'find a provider' resource can help identify clinicians trained in providing guidance and lists of safe items. [ 24 ] In addition to avoidance, healthcare providers may prescribe creams or medications to help relieve the skin reaction. Nickel allergy is the most common contact allergy in industrialized countries, affecting around 8% to 19% of adults and 8% to 10% of children. [ 2 ] Women are affected 4–10 times as frequently as men. [ 2 ] Nickel allergy is estimated to affect 4% of men and 16% of women worldwide. [ 25 ] In southern European countries, nickel allergy is more common (16%) than in northern countries (10%); the results are similar in the USA. [ 26 ] Because nickel can be harmful to skin, its use in everyday products is regulated in some jurisdictions; a safety directive has been in place in Europe since 2004. Denmark in 1980, followed shortly afterward by the European Union (EU), enacted legislation that limited the amount of free nickel in consumer products that come into contact with the skin. This resulted in a significant decrease in sensitization rates among Danish children 0 to 18 years of age, from 24.8% to 9.2% between 1985 and 1998, with similar reductions throughout the EU. [ 27 ] [ 28 ] No such directive exists in the United States, but efforts are under way to mandate safe-use guidelines for nickel. In August 2015, the American Academy of Dermatology (AAD) adopted a nickel safety position paper. [ 29 ] The exact prevalence of Ni-ACD in the general population of the US is largely unknown. However, current estimates suggest that roughly 2.5 million US adults and 250,000 children have a nickel allergy, with treatment of symptoms costing an estimated $5.7 billion per year. [ 30 ] Loma Linda University , the Nickel Allergy Alliance, and the Dermatitis Academy created the first open-access, self-reported patient registry to record nickel allergy prevalence data in the US. [ full citation needed ] In the 17th century, copper miners in Saxony , Germany, began to experience irritation caused by a "dark red ore".
Since the substance, which would later be called nickel, led to many ailments, they believed it to be protected by " goblins " and called it "Goblin's Copper". [ 31 ] Josef Jadassohn described the first case of metal contact dermatitis in 1895, a reaction to a mercury -based therapeutic cream, and confirmed the cause by epicutaneous patch testing. [ 32 ] In the following century, nickel began to be mass-produced for jewelry worldwide because of its low cost, resistance to corrosion, and abundant supply. In 1979, a large comprehensive study of healthy US volunteers found that 9% had been unknowingly sensitized to nickel. [ 33 ] As of 2008, that number had tripled. [ 34 ] Most importantly, nickel allergy among children is increasing, with an estimated 250,000 children sensitized to nickel. [ 35 ] Published literature shows an exponential increase in reported nickel allergy cases. [ 36 ] The North American Contact Dermatitis Group (NACDG) patch-tested 5,085 adults presenting with eczema-like symptoms and found that 19.5% had a positive reaction to nickel. [ 37 ] Nickel allergy is also more prevalent in women (17.1%) than in men (3%), possibly due to cultural norms related to jewelry and ear piercings and therefore increased exposure to nickel. [ 38 ] To investigate the current prevalence of nickel allergy, Loma Linda University , the Nickel Allergy Alliance, and the Dermatitis Academy [ 39 ] are conducting a self-reported nickel allergy-dermatitis survey. [ 40 ]
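The screening criterion described above – a dimethylglyoxime (DMG) wipe that turns pink when an object releases more than about 0.5 μg of nickel per cm² per week – is in effect a simple threshold comparison. The following Python sketch illustrates only that threshold logic; the function name, item names, and release-rate values are invented for the example and are not taken from the article or its references.

```python
# Illustrative sketch only: compare hypothetical nickel release-rate
# measurements against the ~0.5 ug/cm^2-per-week screening threshold
# cited above for the dimethylglyoxime (DMG) wipe test.

DMG_SCREENING_THRESHOLD = 0.5  # micrograms of nickel per cm^2 per week

def exceeds_dmg_threshold(release_rate_ug_per_cm2_week: float) -> bool:
    """Return True if a measured release rate meets or exceeds the
    screening threshold at which a DMG wipe is expected to turn pink."""
    return release_rate_ug_per_cm2_week >= DMG_SCREENING_THRESHOLD

# Hypothetical measurements; items and numbers are made up for illustration.
measurements = {
    "uncoated belt buckle": 1.8,
    "rhodium-plated ring": 0.05,
    "stainless steel watch back": 0.4,
}

for item, rate in measurements.items():
    verdict = ("above screening threshold" if exceeds_dmg_threshold(rate)
               else "below screening threshold")
    print(f"{item}: {rate:.2f} ug/cm^2/week -> {verdict}")
```

In practice, release rates are determined with standardized migration tests rather than by the wipe itself; the sketch demonstrates only how a measured rate would be compared against the screening value.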
https://en.wikipedia.org/wiki/Nickel_allergy
Nickel aluminide refers to either of two widely used intermetallic compounds, Ni 3 Al or NiAl, but the term is sometimes used to refer to any nickel–aluminium alloy. These alloys are widely used because of their high strength even at high temperature, low density, corrosion resistance, and ease of production. [ 1 ] Ni 3 Al is of specific interest as a precipitate in nickel-based superalloys , where it is called the γ' (gamma prime) phase. It gives these alloys high strength and creep resistance up to 0.7–0.8 of the melting temperature. [ 1 ] [ 2 ] Meanwhile, NiAl displays excellent properties such as lower density and higher melting temperature than those of Ni 3 Al, together with good thermal conductivity and oxidation resistance. [ 2 ] These properties make it attractive for special high-temperature applications like coatings on blades in gas turbines and jet engines . However, both these alloys have the disadvantage of being quite brittle at room temperature, with Ni 3 Al remaining brittle at high temperatures as well. [ 1 ] To address this problem, it has been shown that Ni 3 Al can be made ductile when manufactured in single-crystal form rather than in polycrystalline form. [ 3 ] An important disadvantage of polycrystalline Ni 3 Al-based alloys is their room-temperature and high-temperature brittleness, which interferes with potential structural applications. This brittleness is generally attributed to the inability of dislocations to move in the highly ordered lattices. [ 5 ] The introduction of a small amount of boron can drastically increase ductility by suppressing intergranular fracture. [ 6 ] Ni-based superalloys derive their strength from the formation of γ' precipitates (Ni 3 Al) in the γ phase (Ni), which strengthen the alloys through precipitation hardening . In these alloys the volume fraction of the γ' precipitates can be as high as 80%. [ 7 ] Because of this high volume fraction, the evolution of the γ' precipitates during the alloys' life cycles is important: a major concern is the coarsening of the γ' precipitates at high temperature (800 to 1000 °C), which greatly reduces the alloys' strength. [ 7 ] This coarsening is due to the balance between interfacial and elastic energy in the γ + γ' phase and is generally inevitable over long periods of time. [ 7 ] The coarsening problem is addressed by introducing other elements such as Fe, Cr, and Mo, which generate multiphase configurations that can significantly increase the creep resistance. [ 8 ] This creep resistance is attributed to the formation of an inhomogeneous Cr 4.6 MoNi 2.1 precipitate, which pins dislocations and prevents further coarsening of the γ' phase. [ 8 ] The addition of Fe and Cr also drastically increases the weldability of the alloy. [ 8 ] Despite its beneficial properties, NiAl generally suffers from two shortcomings: very high brittleness at low temperatures (below about 330 °C (626 °F)) and rapid loss of strength at temperatures higher than 550 °C (1,022 °F). [ 9 ] The brittleness is attributed both to the high energy of anti-phase boundaries and to the high degree of atomic order along grain boundaries. [ 9 ] As with Ni 3 Al-based alloys, these issues are generally addressed by adding other elements. Candidate alloying elements can be divided into three groups depending on their influence on microstructure. Some of the more successful additions have been shown to be Fe, Co, and Cr, which drastically increase room-temperature ductility as well as hot workability.
[ 10 ] This increase is due to the formation of the γ phase, which modifies the β-phase grains. [ 10 ] Alloying with Fe, Ga, and Mo has also been shown to drastically improve room-temperature ductility. [ 11 ] Most recently, refractory metals such as Cr, W, and Mo have been added, resulting not only in increased room-temperature ductility but also in increased strength and fracture toughness at high temperatures. [ 12 ] This is due to the formation of unique microstructures, such as the eutectic alloy Ni 45.5 Al 9 Mo and α-Cr inclusions, that contribute to solid-solution hardening. [ 12 ] These complex alloys (Ni 42 Al 51 Cr 3 Mo 4 ) have even been shown to have the potential to be fabricated via additive manufacturing processes such as selective laser melting , vastly increasing their potential applications. [ 12 ] In nickel-based superalloys, regions of Ni 3 Al (called the γ' phase) precipitate out of the nickel-rich matrix (called the γ phase) to give high strength and creep resistance. Many alloy formulations are available, and they usually include other elements, such as chromium, molybdenum, and iron, to improve various properties. An alloy of Ni 3 Al, known as IC-221M, is made up of nickel aluminide combined with several other metals including chromium , molybdenum , zirconium and boron . Adding boron increases the ductility of the alloy by beneficially altering the grain-boundary chemistry and promoting grain refinement. The Hall–Petch parameters for this material were σ 0 = 163 MPa and k y = 8.2 MPa·cm^(1/2) (a worked example follows at the end of this article). [ 13 ] Boron increases the hardness of bulk Ni 3 Al by a similar mechanism. This alloy is extremely strong for its weight, five times stronger than common SAE 304 stainless steel . Unlike most alloys, IC-221M increases in strength from room temperature up to 800 °C (1,470 °F). The alloy is very resistant to heat and corrosion , and finds use in heat-treating furnaces and other applications where its longer lifespan and reduced corrosion give it an advantage over stainless steel . [ 14 ] It has been found that the microstructure of this alloy includes a Ni 5 Zr eutectic phase, and therefore solution treatment is effective for enabling hot working without cracking. [ 15 ]
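The Hall–Petch parameters quoted above imply a specific grain-size dependence of yield strength, σ_y = σ_0 + k_y·d^(−1/2). The short Python sketch below evaluates this relation with the reported IC-221M parameters (σ_0 = 163 MPa, k_y = 8.2 MPa·cm^(1/2)); the grain sizes are arbitrary example values chosen only to show the trend, not data from the cited study.

```python
# Illustrative sketch: evaluate the Hall-Petch relation
#   sigma_y = sigma_0 + k_y / sqrt(d)
# with the IC-221M parameters quoted above. Grain sizes are assumed
# example values, not measurements from the cited work.

import math

SIGMA_0_MPA = 163.0      # friction stress, MPa
K_Y_MPA_SQRT_CM = 8.2    # Hall-Petch coefficient, MPa*cm^(1/2)

def hall_petch_yield_strength(grain_size_um: float) -> float:
    """Estimate yield strength (MPa) from grain size given in micrometres."""
    grain_size_cm = grain_size_um * 1e-4   # convert um -> cm to match k_y units
    return SIGMA_0_MPA + K_Y_MPA_SQRT_CM / math.sqrt(grain_size_cm)

for d_um in (10.0, 25.0, 100.0):           # assumed grain sizes
    print(f"d = {d_um:5.1f} um -> sigma_y ~= {hall_petch_yield_strength(d_um):5.1f} MPa")
```

With these parameters, refining the grain size from 100 μm to 10 μm raises the estimated yield strength from roughly 245 MPa to about 420 MPa, consistent with the statement above that boron-promoted grain refinement strengthens the alloy.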
https://en.wikipedia.org/wiki/Nickel_aluminide